Google DeepMind researchers have developed Multistep Consistency Models, which unify TRACT and Consistency Models to narrow the performance gap between standard diffusion and few-step sampling. The method offers a controllable trade-off between sample quality and speed, achieving sample quality close to standard diffusion in just eight steps and improving efficiency in generative modeling tasks.
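The sampling loop behind this trade-off can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the consistency function `f`, the linear time schedule, and the simple re-noising rule are all assumptions made for the sketch.

```python
import numpy as np

def multistep_consistency_sample(f, x_T, T=1.0, num_steps=8, seed=0):
    """Illustrative multistep consistency sampling loop.

    `f(x, t)` stands in for a trained consistency model that maps a noisy
    sample at time t directly to an estimate of the clean sample x_0.
    """
    rng = np.random.default_rng(seed)
    x = x_T
    # Split the diffusion time axis into `num_steps` segments and jump
    # from one boundary to the next with a single model call per segment.
    ts = np.linspace(T, 0.0, num_steps + 1)
    for t, t_next in zip(ts[:-1], ts[1:]):
        x0_hat = f(x, t)  # one-call estimate of the clean sample
        if t_next > 0:
            # Re-noise the estimate to the next boundary's noise level.
            x = x0_hat + t_next * rng.standard_normal(x.shape)
        else:
            x = x0_hat
    return x

# Toy check with a stand-in "model": eight model calls produce one sample.
out = multistep_consistency_sample(lambda x, t: 0.9 * x, np.ones(4))
print(out.shape)  # (4,)
```

With `num_steps=1` this collapses to ordinary one-step consistency sampling; larger values spend more model calls for quality.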
ELLA, a new method described in a Tencent AI paper, enhances text-to-image diffusion models by equipping them with powerful Large Language Models (LLMs) without requiring retraining. It improves comprehension of intricate, dense prompts by introducing the Timestep-Aware Semantic Connector (TSC). ELLA promises significant advances in text-to-image generation without extensive retraining.
Research in 3D generative AI has led to a fusion of 3D generation and reconstruction, notably through innovative methods like DreamFusion and the TripoSR model. TripoSR, developed by Stability AI and Tripo AI, uses a transformer architecture to rapidly generate 3D models from single images, offering significant advancements in AI, computer vision, and computer graphics.
A groundbreaking approach called Strongly Supervised pre-training with ScreenShots (S4) is introduced to enhance Vision-Language Models (VLMs) by leveraging web screenshots. S4 significantly boosts model performance across various tasks, demonstrating up to 76.1% improvement in Table Detection. Its innovative pre-training framework captures diverse supervisions embedded within web pages, advancing the state-of-the-art in VLMs.
Recent studies have highlighted advancements in Vision-Language Models (VLMs), exemplified by OpenAI’s GPT4-V. These models excel at vision-language tasks such as captioning, object localization, and visual question answering. Apple researchers assessed VLM limitations in complex visual reasoning using Raven’s Progressive Matrices, revealing discrepancies and challenges in tasks involving visual deduction.
Advancements in large language models (LLMs) have impacted many fields, yet the legal domain lags behind. Equall.ai’s researchers introduce SaulLM-7B, a public legal LLM specialized for legal text, leveraging extensive pretraining on dedicated legal corpora. It outperforms non-legal models on legal-specific tasks, while leaving room for further improvement on conclusion tasks. Full paper available here.
AI’s pervasive role has raised concerns about the amplification of biases. A recent study reveals covert racism in language models, particularly in their negative associations with African American English (AAE) speakers. The research emphasizes the pressing need for novel strategies to address linguistic prejudice and ensure equitable AI technology. Read the full post on MarkTechPost.
Peking University and Alibaba Group developed FastV to tackle inefficiencies in the attention computation of Large Vision-Language Models (LVLMs). FastV dynamically prunes less relevant visual tokens, significantly reducing computational costs without compromising performance. This improves the computational efficiency and practical deployment of LVLMs, offering a promising solution to resource constraints in real-world applications.
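Attention-based token pruning in the spirit of FastV can be sketched as follows. The function name, keep ratio, and tensor layout are illustrative assumptions; the actual method prunes inside the transformer's forward pass after an early layer.

```python
import numpy as np

def prune_visual_tokens(hidden, attn, num_text, keep_ratio=0.5):
    """Drop the least-attended visual tokens (simplified sketch).

    hidden:   (seq_len, dim) token states; visual tokens come first.
    attn:     (seq_len, seq_len) attention weights averaged over heads.
    num_text: number of trailing text tokens, which are always kept.
    """
    num_visual = hidden.shape[0] - num_text
    # Score each visual token by the attention it receives on average.
    scores = attn[:, :num_visual].mean(axis=0)
    keep = max(1, int(num_visual * keep_ratio))
    # Keep the top-scoring visual tokens, preserving their original order.
    kept_idx = np.sort(np.argsort(scores)[-keep:])
    return np.concatenate([hidden[kept_idx], hidden[num_visual:]], axis=0)

# Toy example: 6 visual tokens + 2 text tokens, keep half the visual ones.
rng = np.random.default_rng(0)
h = rng.normal(size=(8, 4))
a = rng.random(size=(8, 8))
out = prune_visual_tokens(h, a, num_text=2, keep_ratio=0.5)
print(out.shape)  # (5, 4): 3 visual tokens survive plus 2 text tokens
```

Later layers then attend over a much shorter sequence, which is where the computational savings come from.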
Researchers have encountered significant challenges in developing drugs for Idiopathic Pulmonary Fibrosis and renal fibrosis due to their complex pathogenesis and the lack of effective treatments. Using AI, however, they identified TNIK as a promising anti-fibrotic target and developed the inhibitor INS018_055, which shows favorable properties and efficacy in preclinical and clinical studies.
The demand for advanced, scalable, and versatile tools in software development continues to grow. Meeting it requires overcoming significant challenges such as handling vast amounts of data and providing flexible, user-friendly interfaces. C4AI Command-R, a 35-billion-parameter generative model developed by Cohere and Cohere For AI, effectively addresses these challenges.
In data science and AI, embedding entities into vector spaces enables numerical representation, but a study by Netflix Inc. and Cornell University challenges the reliability of cosine similarity, revealing its potential for arbitrary and misleading results. Regularization impacts similarity outcomes, highlighting the need to critically evaluate such metrics and consider alternative approaches.
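The failure mode can be illustrated with a toy matrix-factorization model (an illustrative setup, not the paper's exact construction): rescaling the latent dimensions leaves every model prediction unchanged, yet changes cosine similarities between embeddings freely.

```python
import numpy as np

def cos(u, v):
    """Cosine similarity between two vectors."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

rng = np.random.default_rng(1)
A = rng.normal(size=(5, 3))   # e.g. item embeddings
B = rng.normal(size=(4, 3))   # e.g. user embeddings

# Rescale latent dimension k of A by d[k] and of B by 1/d[k]: the
# model's scores A @ B.T are exactly unchanged by this transformation.
d = np.array([10.0, 1.0, 0.1])
A2, B2 = A * d, B / d
assert np.allclose(A @ B.T, A2 @ B2.T)  # identical predictions

# ...yet the cosine similarity between the same two items differs.
print(cos(A[0], A[1]), cos(A2[0], A2[1]))
```

Since both embedding sets explain the observed data equally well, there is no principled reason to trust one set of cosine similarities over the other, which is the study's core caution.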
Large Language Models (LLMs) have remarkable capabilities across domains such as content generation, question answering, and mathematical problem-solving, challenging the assumption that such abilities require ever more extensive pre-training. A recent study demonstrates that the LLaMA-2 7B model already displays strong mathematical abilities and proposes a supervised fine-tuning method to enhance accuracy, offering insights into scaling behaviors.
We’ve teamed up with Le Monde and Prisa Media to provide French and Spanish news content for ChatGPT.
Google DeepMind has developed SIMA (Scalable Instructable Multiworld Agent), a new AI agent that can play various games, including those it has never encountered before, such as Goat Simulator 3. The agent can follow text commands to play seven different games and navigate 3D environments, showing potential for more generalized AI and skill transfer across multiple environments.
DeepSeek-AI introduces DeepSeek-VL, an open-source Vision-Language (VL) model that bridges the gap between visual data and natural language through a comprehensive approach to data diversity and an innovative architecture. Performance evaluations highlight its exceptional capabilities, marking pivotal advancements in artificial intelligence and propelling the understanding and application of vision-language models.
01.AI has introduced the Yi model family, a significant advance in artificial intelligence. The models demonstrate a strong ability to understand and process both language and visual information, bridging the gap between the two. With a focus on data quality and innovative model architectures, the Yi series shows remarkable performance and practical deployability on consumer-grade hardware.
Researchers have developed an innovative framework leveraging AI to seamlessly integrate visual and audio content creation. By utilizing existing pre-trained models like ImageBind, they established a shared representational space to generate harmonious visual and aural content. The approach outperformed existing models, showcasing its potential in advancing AI-driven multimedia creation. Read more on MarkTechPost.
Researchers from The Chinese University of Hong Kong, Microsoft Research, and Shenzhen Research Institute of Big Data introduce MathScale, a scalable approach that uses cutting-edge LLMs to generate high-quality mathematical reasoning data. The method addresses dataset scalability and quality issues and demonstrates state-of-the-art performance, outperforming equivalent-sized peers on the MWPBENCH dataset.
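The data-generation idea can be sketched as a concept co-occurrence graph plus a random walk. This is a simplified illustration: concept extraction and the final question-synthesis call to an LLM are stubbed out, and all names here are assumptions, not the paper's API.

```python
import random
from itertools import combinations

def build_concept_graph(seed_questions):
    """Co-occurrence graph over math concepts. In the full pipeline an
    LLM would extract concepts from seed questions; here each seed is
    already given as a list of concepts."""
    graph = {}
    for concepts in seed_questions:
        for a, b in combinations(concepts, 2):
            graph.setdefault(a, set()).add(b)
            graph.setdefault(b, set()).add(a)
    return graph

def sample_concepts(graph, walk_len=3, seed=0):
    """Random-walk over the graph to pick a fresh concept combination,
    which would then be handed to an LLM prompt that synthesizes a new
    question (the generation call itself is omitted)."""
    rng = random.Random(seed)
    combo = [rng.choice(sorted(graph))]
    for _ in range(walk_len - 1):
        neighbors = sorted(graph[combo[-1]] - set(combo))
        if not neighbors:
            break
        combo.append(rng.choice(neighbors))
    return combo

seeds = [["fractions", "ratios"], ["ratios", "percentages"],
         ["percentages", "interest"], ["fractions", "decimals"]]
g = build_concept_graph(seeds)
print(sample_concepts(g))
```

Sampling many such walks yields novel concept combinations beyond those in the seed set, which is what lets the dataset scale.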
Multimodal Large Language Models (MLLMs), especially those integrating language with large vision models (LVMs), are transforming a range of fields with their high accuracy, generalization capability, and robust performance. Yet MiVOLOv2, a state-of-the-art specialist model for gender and age determination, still outperforms general-purpose MLLMs at age estimation. The research paper evaluates the potential of such neural networks, including LLaVA and ShareGPT.