In recent years, the AI community has seen a surge in large language model (LLM) development. The focus is now shifting towards Small Language Models (SLMs) due to their practicality. Notably, MobiLlama, a 0.5-billion-parameter SLM, delivers strong performance and efficiency through its parameter-sharing architecture. Its open-source nature fosters collaboration and innovation in AI…
Researchers are making strides in protein structure prediction, crucial for understanding biological processes and diseases. While existing models excel at predicting a single static structure, they struggle to capture the range of conformations a protein can adopt. A new method, AlphaFLOW, integrates flow matching with predictive models to generate diverse protein structure ensembles, promising a deeper understanding of protein dynamics and…
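At a high level, flow matching samples a structure by integrating a learned velocity field from noise toward the data distribution, and ensemble diversity comes from different noise draws. A minimal sketch of that sampling loop, with `predict_clean_structure` standing in for the predictive backbone (the names and the simple Euler scheme are illustrative assumptions, not AlphaFLOW's exact recipe):

```python
# Illustrative flow-matching sampling loop for structure ensembles.
import numpy as np

def predict_clean_structure(x_t: np.ndarray, t: float) -> np.ndarray:
    """Hypothetical denoiser: maps a noisy structure at time t to a clean estimate."""
    return x_t  # placeholder

def sample_ensemble(n_samples: int, n_atoms: int, n_steps: int = 10, seed: int = 0):
    rng = np.random.default_rng(seed)
    ensemble = []
    for _ in range(n_samples):
        x = rng.standard_normal((n_atoms, 3))       # start from noise (t = 0)
        for i in range(n_steps):
            t = i / n_steps
            x1_hat = predict_clean_structure(x, t)  # model's estimate of the final structure
            v = (x1_hat - x) / (1.0 - t)            # velocity along the linear interpolation path
            x = x + v / n_steps                     # Euler step toward the data distribution
        ensemble.append(x)
    return ensemble                                 # different noise seeds give different conformations
```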
Researchers from the University of Michigan and Apple have developed a groundbreaking approach to enhance the efficiency of large language models (LLMs). By distilling the decomposition phase of LLMs into smaller models, they achieved notable reductions in computational demands while maintaining high performance across various tasks. This innovation promises cost savings and increased accessibility to…
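A hedged sketch of the decompose-then-solve split described above, where a distilled small model handles the decomposition step and the large model is kept only for solving sub-problems (helper names and the line-per-step format are hypothetical):

```python
# Illustrative pipeline: distill decomposition into a small model, keep the large model for solving.
def build_distillation_data(tasks, teacher_llm):
    # The teacher writes a step-by-step decomposition for each task; these pairs
    # become the fine-tuning set for the small decomposer.
    return [(t, teacher_llm(f"Break this problem into sub-problems:\n{t}")) for t in tasks]

def answer(task, small_decomposer, solver_llm):
    plan = small_decomposer(task)                         # cheap: runs on the distilled model
    solutions = [solver_llm(step) for step in plan.splitlines() if step.strip()]
    return solutions[-1] if solutions else ""
```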
Intent-based Prompt Calibration (IPC) automates prompt engineering by fine-tuning prompts based on user intention using synthetic examples, achieving superior results with minimal data and iterations. The modular approach allows for easy adaptation to various tasks and addresses data bias and imbalance issues. IPC proves effective in tasks like moderation and generation, outperforming other methods.
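The calibration loop itself is easy to picture: generate synthetic boundary cases for the stated intent, collect failures, and have the LLM rewrite the prompt. A minimal sketch, assuming a generic `llm(prompt) -> text` callable rather than any specific API:

```python
# Illustrative intent-based prompt-calibration loop.
from typing import Callable

def calibrate_prompt(llm: Callable[[str], str],
                     task_intent: str,
                     initial_prompt: str,
                     n_iters: int = 3) -> str:
    """Iteratively refine `initial_prompt` against the stated intent using synthetic cases."""
    prompt = initial_prompt
    for _ in range(n_iters):
        # 1) Generate challenging synthetic inputs for the user's intent.
        cases = llm(f"Write 5 boundary-case inputs for this task: {task_intent}")
        # 2) Run the current prompt on them and let the LLM flag intent violations.
        outputs = llm(f"{prompt}\n\nInputs:\n{cases}")
        errors = llm(
            f"Task intent: {task_intent}\nOutputs:\n{outputs}\n"
            "List the cases where these outputs violate the intent."
        )
        # 3) Rewrite the prompt so the observed failures no longer occur.
        prompt = llm(
            f"Current prompt:\n{prompt}\nObserved failures:\n{errors}\n"
            "Rewrite the prompt to fix these failures. Return only the new prompt."
        )
    return prompt
```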
Microsoft researchers introduced ViSNet, a method that enhances molecular property prediction and molecular dynamics simulations. This vector-scalar interactive graph neural network framework improves molecular geometry modeling and encodes molecular interactions efficiently. ViSNet outperforms existing algorithms on various datasets, offering promise for revolutionizing computational chemistry and biophysics. For further details, refer to the paper and blog.
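As a rough illustration of the vector-scalar idea, here is one toy message-passing step in which scalar features are lifted into a vector channel along edge directions and vector features are projected back into scalars; this is a simplification of the general pattern, not ViSNet's actual runtime geometry module:

```python
# Toy vector-scalar interactive message-passing step (illustrative only).
import numpy as np

def vs_update(s, v, edges, pos):
    """s: (N, F) scalar features; v: (N, 3, F) vector features; pos: (N, 3) coordinates."""
    s_new, v_new = s.copy(), v.copy()
    for i, j in edges:                                   # message from node j to node i
        d = pos[j] - pos[i]
        r = float(np.linalg.norm(d)) + 1e-9
        unit = d / r                                     # edge direction, shape (3,)
        s_new[i] += s[j] / r                             # distance-weighted scalar message
        v_new[i] += unit[:, None] * s[j][None, :]        # scalars lifted into the vector channel
        s_new[i] += (v[j] * unit[:, None]).sum(axis=0)   # vector info projected back to scalars
    return s_new, v_new
```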
Large Language Models (LLMs) have enhanced Natural Language Processing (NLP) applications, but struggle with longer texts. A new framework, Dual Chunk Attention (DCA), developed by researchers from The University of Hong Kong, Alibaba Group, and Fudan University, overcomes this limitation. DCA’s chunk-based attention mechanisms and integration with Flash Attention significantly extend LLMs’ usable context length without extra…
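The underlying trick can be illustrated with position indices alone: keep intra-chunk distances exact and clamp cross-chunk distances so the model never sees a relative position outside its pretraining range. A simplified sketch of that remapping (an illustration of the general chunked-position idea, not DCA's exact three-part scheme):

```python
# Simplified chunked relative-position remapping (illustrative).
import numpy as np

def chunked_relative_positions(seq_len: int, chunk_size: int, max_dist: int):
    """Relative-position matrix whose entries stay within the pretraining range."""
    pos = np.arange(seq_len)
    rel = pos[:, None] - pos[None, :]                    # raw distances, can exceed max_dist
    same_chunk = (pos[:, None] // chunk_size) == (pos[None, :] // chunk_size)
    # Intra-chunk pairs keep their true local distance; cross-chunk pairs are clamped
    # so the model never attends at a distance it was not trained on.
    return np.where(same_chunk, rel, np.clip(rel, -max_dist, max_dist))
```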
The success of large language models relies on extensive text datasets for pre-training. However, indiscriminate data use may not be optimal due to varying quality. Data selection methods are crucial for optimizing training datasets and reducing costs. Researchers proposed a unified framework for data selection, emphasizing the need to understand selection mechanisms and utility functions.
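In its simplest form, data selection scores each candidate example with a utility function and keeps the highest-scoring ones within a budget. A minimal sketch, with the utility function left abstract since it could be perplexity under a reference model, a quality classifier, or any other scorer:

```python
# Illustrative utility-based data selection under a token budget.
from typing import Callable, Sequence

def select_data(examples: Sequence[str],
                utility: Callable[[str], float],
                budget: int) -> list[str]:
    scored = sorted(examples, key=utility, reverse=True)  # highest-utility examples first
    selected, used = [], 0
    for ex in scored:
        cost = len(ex.split())                            # word count as a rough token-cost proxy
        if used + cost > budget:
            continue
        selected.append(ex)
        used += cost
    return selected
```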
The Claude 3 model family from Anthropic introduces a new era in AI with its enhanced cognitive performance. These models, such as Claude 3 Opus, excel in understanding complex tasks, processing speed, and generating nuanced text. Their sophisticated algorithms and versatility address key challenges, marking a significant leap in AI capabilities.
The quest to enhance human-computer interaction has led to significant strides in automating tasks. OmniACT, a groundbreaking dataset and benchmark, integrates visual and textual data to generate precise action scripts for a wide range of functions. However, the current gap between autonomous agents and human efficiency underscores the complexity of automating computer tasks. This research…
Image Quality Assessment (IQA) standardizes image evaluation, drawing on subjective studies and, increasingly, large multimodal models (LMMs). LMMs capture a nuanced understanding of visual data, improving performance across tasks. Researchers from multiple universities proposed Co-Instruct, a dataset for open-ended multi-image quality comparison, resulting in significant improvements over existing LMMs. This revolutionizes image quality assessment.
Qualcomm AI Research introduces GPTVQ, a method utilizing vector quantization to improve the efficiency-accuracy trade-off in large language models (LLMs). It addresses the challenges posed by growing parameter counts, reducing model size while preserving quality. The study underscores GPTVQ’s potential for real-world applications and advancing the accessibility of LLMs, marking a significant advancement in…
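The core mechanism, stripped of GPTVQ's Hessian-aware refinements, is ordinary vector quantization: group weights into short vectors, fit a small codebook, and store only the codebook plus per-vector indices. A toy sketch:

```python
# Toy vector quantization of a weight matrix with a k-means codebook (illustrative).
import numpy as np

def vector_quantize(weights: np.ndarray, dim: int = 2, n_codes: int = 256, n_iters: int = 10):
    # Assumes weights.size is divisible by dim and there are at least n_codes vectors.
    vecs = weights.reshape(-1, dim)                        # group weights into dim-sized vectors
    rng = np.random.default_rng(0)
    codebook = vecs[rng.choice(len(vecs), n_codes, replace=False)]
    for _ in range(n_iters):                               # plain k-means on the weight vectors
        dists = ((vecs[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        assign = dists.argmin(1)
        for c in range(n_codes):
            members = vecs[assign == c]
            if len(members):
                codebook[c] = members.mean(0)
    dists = ((vecs[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    indices = dists.argmin(1)                              # only codebook + indices get stored
    dequantized = codebook[indices].reshape(weights.shape)
    return codebook, indices, dequantized
```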
ChunkAttention, a novel technique developed by a Microsoft team, optimizes the efficiency of large language models’ self-attention mechanism by employing a prefix-aware key/value (KV) cache system and a two-phase partition algorithm. It significantly improves inference speed, achieving a 3.2 to 4.8 times speedup compared to existing state-of-the-art implementations, addressing memory and computational speed challenges in…
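The prefix-aware cache can be pictured as a trie of token chunks: requests that share a prompt prefix walk the same path and reuse the key/value tensors stored along it. A minimal sketch of that layout (the two-phase attention kernel itself is not reproduced here, and the names are placeholders):

```python
# Illustrative prefix-aware KV cache built as a trie of token chunks.
CHUNK = 64  # tokens per cached chunk

class TrieNode:
    def __init__(self, kv=None):
        self.kv = kv               # cached key/value tensors for this chunk
        self.children = {}         # next chunk of tokens -> TrieNode

class PrefixKVCache:
    def __init__(self):
        self.root = TrieNode()

    def lookup_or_insert(self, tokens, compute_kv):
        """Walk the prompt chunk by chunk, reusing cached KV wherever the prefix matches."""
        node, kvs = self.root, []
        for i in range(0, len(tokens) - len(tokens) % CHUNK, CHUNK):
            key = tuple(tokens[i:i + CHUNK])
            if key not in node.children:               # first request with this prefix chunk
                node.children[key] = TrieNode(kv=compute_kv(key))
            node = node.children[key]
            kvs.append(node.kv)                        # shared across all requests with this prefix
        return kvs
```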
Microsoft and NVIDIA’s latest advancements in AI are transforming industries. AI’s use cases include healthcare, virtual assistants, fraud detection, and more. Microsoft offers new AI services like Azure AI Studio and Azure Boost, along with infrastructure enhancements like custom AI chips and new virtual machine series. Attend NVIDIA GTC to explore these innovations.
Recent research has focused on artificial multimodal representation learning, particularly the integration of tactile perception. A touch-vision-language (TVL) dataset and benchmark have been introduced by UC Berkeley, Meta AI, and TU Dresden, aiming to advance touch digitization and robotic touch applications. The proposed methodology demonstrates significant improvements over existing models, benefiting pseudo-label-based learning methods and…
Researchers from the CoAI Group at Tsinghua University and Microsoft Research propose a theory for optimizing language model (LM) learning that frames the objective as maximizing the data compression ratio. They derive a “Learning Law” theorem, validated in experiments, showing that in the optimal learning process every example contributes equally. The optimized process improves the coefficients of LM scaling laws, promising faster LM training in practice.
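The compression framing rests on a standard equivalence: encoding a corpus with the model's predictive distribution costs roughly the negative log-likelihood in bits, so maximizing the compression ratio and minimizing the usual LM loss are the same objective. A hedged restatement (notation below is mine, not the paper's):

```latex
% Arithmetic coding with the model's predictions costs about -log2 p(x) bits, so for
% a corpus x_{1:N} over vocabulary V the compression ratio is maximized exactly when
% the average negative log-likelihood (the usual LM loss) is minimized.
\[
\text{ratio}(\theta) \;=\; \frac{N \log_2 |V|}{\,-\sum_{i=1}^{N} \log_2 p_\theta(x_i \mid x_{<i})\,}
\]
```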
Yuri Burda and Harri Edwards of OpenAI experimented with training a large language model to do basic arithmetic, discovering unexpected behaviors like grokking and double descent. These odd phenomena challenge classical statistics and highlight the mysterious nature of deep learning. Understanding these behaviors could unlock the next generation of AI and mitigate potential risks.
Large language models (LLMs) have advanced machine understanding and text generation. Conventional probability-based evaluations are critiqued for not capturing LLMs’ full abilities. A new generation-based evaluation method has been proposed, proving more realistic and accurate in assessing LLMs. It challenges current standards and calls for evolved evaluation paradigms to reflect true LLM potential and limitations.
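The contrast between the two evaluation styles is easiest to see on a multiple-choice task: probability-based scoring ranks the given options by likelihood, while generation-based scoring lets the model answer freely and checks what it actually produced. A sketch with hypothetical `loglikelihood` and `generate` helpers standing in for whatever an evaluation harness provides:

```python
# Illustrative contrast between probability-based and generation-based evaluation.
from typing import Optional

def prob_based_answer(model, question: str, choices: list[str]) -> str:
    # Probability-based: score each candidate continuation and pick the most likely one.
    scores = [model.loglikelihood(question, c) for c in choices]
    return choices[max(range(len(choices)), key=scores.__getitem__)]

def generation_based_answer(model, question: str, choices: list[str]) -> Optional[str]:
    # Generation-based: let the model answer freely, then check which choice it named.
    text = model.generate(question + "\nAnswer:")
    hits = [c for c in choices if c.lower() in text.lower()]
    return hits[0] if hits else None   # no match counts as an error, unlike the prob-based path
```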
Recent research has proposed expanding transformer context windows using recurrent memory, sidestepping the compute scalability limits of attention over very long inputs. The team introduced the BABILong benchmark for evaluating NLP models on facts dispersed across lengthy documents, achieving a new record for the largest sequence size handled by a single model and analyzing GPT-4 and RAG on…
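The recurrent-memory idea reduces to a simple loop: split the input into segments, run the backbone on each segment together with a small set of memory vectors, and carry the updated memory forward so information can propagate across segments. A minimal sketch with placeholder names, not the paper's implementation:

```python
# Illustrative recurrent-memory processing of a long input.
import numpy as np

def process_long_input(segments: list,
                       backbone,                  # hypothetical model: (memory, segment) -> (memory, output)
                       mem_size: int = 16,
                       d_model: int = 64):
    memory = np.zeros((mem_size, d_model))        # learned memory tokens in the real setup
    outputs = []
    for seg in segments:                          # effective context grows with the number of segments
        memory, out = backbone(memory, seg)
        outputs.append(out)
    return memory, outputs
```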
Recent developments in vision-language models have led to advanced AI assistants capable of understanding text and images. However, these models face limitations such as task diversity and data bias. To address these challenges, researchers have introduced VISION-FLAN, a diverse dataset for fine-tuning VLMs, yielding impressive results and emphasizing the importance of diversity and human-centeredness in…
TOWER, an innovative open-source multilingual Large Language Model, addresses the increasing demand for effective translation across languages. Developed through collaborative efforts, it encompasses a base model trained on extensive multilingual data and a fine-tuning phase for task-specific proficiency. TOWER’s superior performance challenges the dominance of closed-source models, revolutionizing translation technology and setting a new benchmark…