Artificial Intelligence
The article discusses the challenges and limitations of AI technology, highlighting incidents where AI systems made significant errors or had unintended consequences, such as Google’s Gemini refusing to generate images of white people, Microsoft’s Bing chat making inappropriate remarks, and customer service chatbots causing trouble for the companies that deployed them. It emphasizes the need for a…
Recent advancements in healthcare harness multilingual language models such as GPT-4, MedPalm-2, and open-source alternatives like Llama 2, but their effectiveness on non-English medical queries remains limited. Shanghai researchers developed MMedLM 2, a multilingual medical language model that outperforms its peers and benefits diverse linguistic communities. The study emphasizes the significance of comprehensive evaluation metrics and auto-regressive training…
Unlocking the potential of Large Language Models (LLMs) for specific tasks remains challenging because of their scale and the intricacies of training them. A study by Google researchers explored the two main approaches for fine-tuning LLMs, full-model tuning (FMT) and parameter-efficient tuning (PET), shedding light on their effectiveness in different scenarios.…
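As a rough illustration of the PET idea, the sketch below freezes a pretrained PyTorch linear layer and trains only a small low-rank adapter (a LoRA-style update); the layer sizes and hyperparameters are illustrative and not taken from the study.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Parameter-efficient tuning (PET): freeze the pretrained weight and learn
    a small low-rank correction, instead of updating every parameter as in
    full-model tuning (FMT)."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)        # frozen pretrained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the trainable low-rank update.
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scale

layer = nn.Linear(512, 512)
pet_layer = LoRALinear(layer, rank=8)
trainable = sum(p.numel() for p in pet_layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in pet_layer.parameters())
print(f"trainable {trainable} of {total} parameters")  # a tiny fraction vs. FMT
```

Only the two low-rank matrices receive gradients, which is what makes PET attractive when compute or task data is limited.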
Researchers have developed IDEA, a model for nonstationary time series forecasting that addresses the challenges of distribution shift and nonstationarity. By introducing an identification theory for latent environments, the model distinguishes stationary from nonstationary variables and outperforms other forecasting models. Experiments on real-world datasets show significant improvements in forecasting accuracy, particularly on challenging benchmarks like weather…
Recent advancements in Artificial Intelligence (AI) and Deep Learning, particularly in Natural Language Processing (NLP), have led to the development of new models, Hawk and Griffin, by Google DeepMind. These models incorporate gated linear recurrences and local attention to improve sequence processing efficiency, offering a promising alternative to conventional methods.
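For intuition, here is a deliberately simplified gated linear recurrence in PyTorch; the actual Hawk/Griffin recurrence and gating parameterization differ from this, so treat it only as a sketch of why the per-step state stays fixed-size regardless of sequence length.

```python
import torch

def gated_linear_recurrence(x: torch.Tensor, gate: torch.Tensor) -> torch.Tensor:
    """Simplified gated linear recurrence over a sequence.

    x and gate are (seq_len, hidden) tensors; gate values in (0, 1) control how
    much past state to keep at each step. Unlike full attention, the state is a
    fixed-size vector, so per-step cost does not grow with sequence length."""
    state = torch.zeros(x.shape[-1])
    outputs = []
    for x_t, a_t in zip(x, gate):
        # Keep a gated fraction of the previous state, mix in the new input.
        state = a_t * state + (1.0 - a_t) * x_t
        outputs.append(state)
    return torch.stack(outputs)

seq = torch.randn(16, 64)
gates = torch.sigmoid(torch.randn(16, 64))   # gates would be learned in a real model
print(gated_linear_recurrence(seq, gates).shape)  # torch.Size([16, 64])
```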
In recent years, the AI community has seen a surge in large language model (LLM) development. The focus is now shifting towards Small Language Models (SLMs) due to their practicality. Notably, MobiLlama, a 0.5 billion parameter SLM, excels in performance and efficiency with its innovative architecture. Its open-source nature fosters collaboration and innovation in AI…
Researchers are making strides in protein structure prediction, crucial for understanding biological processes and diseases. While traditional models excel at predicting single structures, they struggle to capture the range of conformations a protein can adopt. A new method, AlphaFLOW, integrates flow matching with predictive models to generate diverse protein structure ensembles, promising a deeper understanding of protein dynamics and…
Researchers from the University of Michigan and Apple have developed a groundbreaking approach to enhance the efficiency of large language models (LLMs). By distilling the decomposition phase of LLMs into smaller models, they achieved notable reductions in computational demands while maintaining high performance across various tasks. This innovation promises cost savings and increased accessibility to…
Intent-based Prompt Calibration (IPC) automates prompt engineering by fine-tuning prompts based on user intention using synthetic examples, achieving superior results with minimal data and iterations. The modular approach allows for easy adaptation to various tasks and addresses data bias and imbalance issues. IPC proves effective in tasks like moderation and generation, outperforming other methods.
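The calibration loop can be pictured roughly as below; the helper callables (generate_examples, label_with_prompt, refine_prompt) are hypothetical stand-ins for LLM calls, not interfaces from the IPC paper.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Example:
    text: str
    expected: str

def calibrate_prompt(
    initial_prompt: str,
    user_intent: str,
    generate_examples: Callable[[str, str, int], List[Example]],  # LLM-backed in practice
    label_with_prompt: Callable[[str, str], str],                 # run the prompt on one case
    refine_prompt: Callable[[str, str, List[Example]], str],      # rewrite prompt from failures
    iterations: int = 5,
    batch: int = 10,
) -> str:
    """Sketch of an intent-based calibration loop: propose synthetic boundary
    cases, score the current prompt on them, and rewrite it from its failures."""
    prompt = initial_prompt
    for _ in range(iterations):
        examples = generate_examples(user_intent, prompt, batch)
        failures = [ex for ex in examples
                    if label_with_prompt(prompt, ex.text) != ex.expected]
        if not failures:
            break                       # the prompt already matches the stated intent
        prompt = refine_prompt(prompt, user_intent, failures)
    return prompt
```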
Microsoft researchers introduced ViSNet, a method enhancing predictions of molecular properties and molecular dynamics simulations. This vector-scalar interactive graph neural network framework improves molecular geometry modeling and encodes molecular interactions efficiently. ViSNet outperforms existing algorithms in various datasets, offering promise for revolutionizing computational chemistry and biophysics. For further details, refer to the paper and blog.
Large Language Models (LLMs) have enhanced Natural Language Processing (NLP) applications, but struggle with longer texts. A new framework, Dual Chunk Attention (DCA), developed by researchers from The University of Hong Kong, Alibaba Group, and Fudan University, overcomes this limitation. DCA’s innovative attention mechanisms and integration with Flash Attention significantly improve LLMs’ capacity without extra…
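As a loose illustration of the chunking idea, the sketch below runs causal attention independently inside fixed-size chunks; DCA's actual intra-, inter-, and successive-chunk attention with remapped position indices is considerably more involved, so this shows only the basic decomposition.

```python
import torch
import torch.nn.functional as F

def intra_chunk_attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor,
                          chunk_size: int) -> torch.Tensor:
    """Causal scaled dot-product attention applied independently per chunk."""
    seq_len, dim = q.shape
    out = torch.zeros_like(v)
    for start in range(0, seq_len, chunk_size):
        end = min(start + chunk_size, seq_len)
        scores = q[start:end] @ k[start:end].T / dim ** 0.5
        # Causal mask: each position attends only to earlier positions in its chunk.
        mask = torch.triu(torch.ones(end - start, end - start, dtype=torch.bool), 1)
        scores = scores.masked_fill(mask, float("-inf"))
        out[start:end] = F.softmax(scores, dim=-1) @ v[start:end]
    return out

q = k = v = torch.randn(12, 32)
print(intra_chunk_attention(q, k, v, chunk_size=4).shape)  # torch.Size([12, 32])
```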
The success of large language models relies on extensive text datasets for pre-training. However, indiscriminate data use may not be optimal due to varying quality. Data selection methods are crucial for optimizing training datasets and reducing costs. Researchers proposed a unified framework for data selection, emphasizing the need to understand selection mechanisms and utility functions.
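In its simplest instantiation, such a framework reduces to scoring candidate examples with a utility function and keeping the highest-scoring ones, as in this hedged sketch; the length-based utility shown is deliberately naive and purely illustrative.

```python
from typing import Callable, List, Tuple

def select_training_data(corpus: List[str],
                         utility: Callable[[str], float],
                         budget: int) -> List[str]:
    """Utility-based data selection: score every candidate with a utility
    function and keep the top `budget` examples. Real selection methods differ
    mainly in how the utility and the selection mechanism are defined."""
    scored: List[Tuple[float, str]] = sorted(
        ((utility(doc), doc) for doc in corpus), reverse=True
    )
    return [doc for _, doc in scored[:budget]]

# Toy usage: prefer longer documents (a deliberately naive utility).
corpus = ["short", "a somewhat longer document", "medium length text"]
print(select_training_data(corpus, utility=len, budget=2))
```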
The Claude 3 model family from Anthropic introduces a new era in AI with its enhanced cognitive performance. These models, such as Claude 3 Opus, excel in understanding complex tasks, processing speed, and generating nuanced text. Their sophisticated algorithms and versatility address key challenges, marking a significant leap in AI capabilities.
The quest to enhance human-computer interaction has led to significant strides in automating tasks. OmniACT, a groundbreaking dataset and benchmark, integrates visual and textual data to generate precise action scripts for a wide range of functions. However, the current gap between autonomous agents and human efficiency underscores the complexity of automating computer tasks. This research…
Image Quality Assessment (IQA) standardizes image evaluation by incorporating subjective studies and large multimodal models (LMMs). LMMs capture a nuanced understanding of visual data, improving performance across tasks. Researchers from multiple universities proposed Co-Instruct, a dataset for open-ended multi-image quality comparison, which yields significant improvements over existing LMMs and marks a major step forward for image quality assessment.
Qualcomm AI Research introduces GPTVQ, a method that uses vector quantization to improve the efficiency and accuracy trade-offs in large language models (LLMs). It addresses the challenge of growing parameter counts, offering superior results while reducing model size. The study underscores GPTVQ’s potential for real-world applications and for making LLMs more accessible, marking a significant advancement in…
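A minimal sketch of the underlying vector-quantization idea, assuming plain k-means over small groups of weights; GPTVQ itself uses more refined, data-aware updates, so this only shows the codebook-plus-indices structure that shrinks the stored model.

```python
import torch

def vector_quantize(weights: torch.Tensor, codebook_size: int = 256, dim: int = 2):
    """Group weights into `dim`-element vectors, fit a codebook with a few
    k-means steps, and represent the matrix by codebook indices. With 256
    entries over 2-element vectors this stores roughly 4 bits per weight."""
    flat = weights.reshape(-1, dim)                       # (n_vectors, dim)
    init = torch.randperm(flat.shape[0])[:codebook_size]  # naive k-means init
    codebook = flat[init].clone()
    for _ in range(10):                                   # a few k-means iterations
        assign = torch.cdist(flat, codebook).argmin(dim=1)
        for k in range(codebook_size):
            members = flat[assign == k]
            if len(members) > 0:
                codebook[k] = members.mean(dim=0)
    assign = torch.cdist(flat, codebook).argmin(dim=1)
    quantized = codebook[assign].reshape(weights.shape)
    return quantized, codebook, assign                    # indices are what gets stored

w = torch.randn(512, 512)
q, cb, idx = vector_quantize(w)
print((w - q).abs().mean())   # average quantization error
```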
ChunkAttention, a novel technique developed by a Microsoft team, optimizes the efficiency of large language models’ self-attention mechanism by employing a prefix-aware key/value (KV) cache system and a two-phase partition algorithm. It significantly improves inference speed, achieving a 3.2 to 4.8 times speedup compared to existing state-of-the-art implementations, addressing memory and computational speed challenges in…
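The sharing idea can be sketched with a dictionary keyed by token prefixes, as below; ChunkAttention's actual prefix tree and two-phase attention kernel are far more sophisticated, and the names here (PrefixKVCache, get_or_compute) are illustrative rather than taken from the paper.

```python
from typing import Dict, List, Tuple

class PrefixKVCache:
    """Prefix-aware KV cache sketch: key/value chunks are stored once per
    distinct token prefix, so requests sharing a system prompt or few-shot
    prefix reuse the same chunks instead of duplicating them."""

    def __init__(self, chunk_size: int = 4):
        self.chunk_size = chunk_size
        self.chunks: Dict[Tuple[int, ...], object] = {}

    def get_or_compute(self, tokens: List[int], compute_kv) -> List[object]:
        kv_chunks = []
        for start in range(0, len(tokens), self.chunk_size):
            prefix = tuple(tokens[:start + self.chunk_size])  # chunk identified by its full prefix
            if prefix not in self.chunks:
                self.chunks[prefix] = compute_kv(tokens[start:start + self.chunk_size])
            kv_chunks.append(self.chunks[prefix])
        return kv_chunks

cache = PrefixKVCache(chunk_size=2)
fake_kv = lambda toks: ("kv", tuple(toks))        # stand-in for real K/V tensors
a = cache.get_or_compute([1, 2, 3, 4], fake_kv)
b = cache.get_or_compute([1, 2, 9, 9], fake_kv)
print(a[0] is b[0])   # True: the shared [1, 2] prefix chunk is stored only once
```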
Microsoft and NVIDIA’s latest advancements in AI are transforming industries. AI’s use cases include healthcare, virtual assistants, fraud detection, and more. Microsoft offers new AI services like Azure AI Studio and Azure Boost, along with infrastructure enhancements like custom AI chips and new virtual machine series. Attend NVIDIA GTC to explore these innovations.
Recent research has focused on artificial multimodal representation learning, particularly the integration of tactile perception. A touch-vision-language (TVL) dataset and benchmark have been introduced by UC Berkeley, Meta AI, and TU Dresden, aiming to advance touch digitization and robotic touch applications. The proposed methodology demonstrates significant improvements over existing models, benefiting pseudo-label-based learning methods and…
Researchers from the CoAI Group at Tsinghua University and Microsoft Research propose a theory for optimizing language model (LM) learning that emphasizes maximizing the data compression ratio. They derive a Learning Law theorem, validated in experiments, showing that every training example contributes equally under optimal learning. The optimized learning process improves the coefficients of LM scaling laws, promising faster, practically significant LM training.
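A hedged sketch of the compression view, assuming a naive fixed-width baseline encoding; the function and constants below are illustrative, and the paper's exact formalization may differ.

```python
import math

def compression_ratio(token_probs, bits_per_raw_token=16.0):
    """A model that assigns probability p to a token can encode it in about
    -log2(p) bits, so a better model compresses the corpus further; the theory
    ties optimal learning to maximizing this ratio."""
    model_bits = sum(-math.log2(p) for p in token_probs)   # description length under the model
    raw_bits = bits_per_raw_token * len(token_probs)       # naive fixed-width encoding
    return raw_bits / model_bits

print(compression_ratio([0.5, 0.25, 0.9, 0.8]))  # higher is better
```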