Artificial Intelligence
API-BLEND is a novel dataset that addresses the challenge of integrating APIs into Large Language Models (LLMs) to enhance AI systems. It includes diverse, real-world training data and emphasizes sequencing tasks. Empirical evaluations demonstrate its superiority in training and benchmarking LLMs for API integration, fostering better out-of-domain generalization and performance in complex tasks through conversational…
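The summary highlights API "sequencing" as the core task. As a minimal sketch of what evaluating such a task might look like, the toy metrics below score a predicted sequence of API calls against a gold sequence; the function names, metrics, and data format are illustrative assumptions, not API-BLEND's actual schema or evaluation suite.

```python
# Hypothetical sketch: scoring a predicted API-call sequence against a gold
# sequence, the kind of "sequencing" evaluation the summary describes.
# Names and data format are illustrative, not the dataset's real schema.

def sequence_exact_match(predicted, gold):
    """Return 1.0 if the predicted API sequence matches the gold one exactly."""
    return 1.0 if predicted == gold else 0.0

def api_f1(predicted, gold):
    """Set-level F1 over API names, ignoring order."""
    p, g = set(predicted), set(gold)
    if not p or not g:
        return 0.0
    precision = len(p & g) / len(p)
    recall = len(p & g) / len(g)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

gold = ["search_flights", "select_seat", "book_ticket"]
pred = ["search_flights", "book_ticket"]
print(sequence_exact_match(pred, gold))  # 0.0
print(round(api_f1(pred, gold), 3))      # 0.8
```

Exact match is strict about order (the part sequencing benchmarks stress), while the set-level F1 gives partial credit for recovering the right APIs in the wrong order.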
The development of reinforcement learning (RL) techniques for large language models (LLMs) has led to ArCHer, a hierarchical actor-critic framework for multi-turn decision-making. By training an utterance-level value function alongside a token-level policy, it enables LLM agents to optimize strategies over whole interactions rather than single responses, significantly advancing multi-turn RL for language agents.
Large language models (LLMs) trained on extensive text data exhibit impressive abilities across various tasks, challenging the traditional benchmarks. Studies by MIT and others show that when LLMs utilize collective intelligence, they can compete with human crowd-based methods in forecasting, offering practical benefits for real-world applications. This signifies a potential for broader societal use of…
Occiglot introduces Model Release v0.1, focusing on European language modeling to address underrepresentation by major players. Releasing open-source 7B model checkpoints for English, German, French, Spanish, and Italian, it emphasizes continual pre-training and instruction tuning, supporting linguistic diversity and cultural nuance. The initiative aims to democratize language models and align with European values.
The development of FlexLLM addresses a critical bottleneck in deploying large language models by offering a more resource-efficient framework for their finetuning and inference tasks. This system enhances computational efficiency, promising to broaden the accessibility and applicability of advanced natural language processing technologies. FlexLLM represents a significant advancement in the field, optimizing LLM deployment and…
Large Vision-Language Models (LVLMs), such as GPT-4, exhibit exceptional proficiency in real-world image tasks but struggle with abstract concepts. The introduction of Multimodal ArXiv, including ArXivCap with millions of scientific images and captions, aims to enhance LVLMs’ scientific understanding. ArXivQA, with 100,000 questions, further improves LVLMs’ reasoning abilities. LVLMs still face challenges in accurately interpreting…
Advancements in video generation technology using AI have the potential to revolutionize industries. Challenges in achieving high-quality outputs and managing computational costs have limited accessibility. However, the development of Open-Sora by the Colossal-AI team addresses these challenges, marking a significant advancement in the field. This open-source library offers an efficient and cost-effective solution, making high-quality…
Recent advancements in language technology have led to the development of Large Language Models (LLMs) with remarkable zero-shot capabilities. Researchers from Brown University have introduced Bonito, an open-source model that converts unannotated text into task-specific instruction-tuning datasets, enhancing the performance of pretrained models in specialized domains. Bonito demonstrates strong potential for language model adaptation in…
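The Bonito item describes a text-to-task transformation: unannotated passages in, (instruction, response) pairs out, conditioned on a task type. The sketch below shows only the shape of that workflow; the prompt tokens, task names, and `generate_pair` stub are hypothetical stand-ins, not Bonito's real interface or prompt format.

```python
# Hypothetical sketch of a Bonito-style workflow: turn unannotated domain
# text into (instruction, response) pairs for a chosen task type. The prompt
# format and task names below are assumptions for illustration only.

TASK_TYPES = {"extractive_qa", "summarization", "yes_no_qa"}

def build_prompt(passage, task_type):
    if task_type not in TASK_TYPES:
        raise ValueError(f"unknown task type: {task_type}")
    return f"<|task|>{task_type}<|context|>{passage}<|pair|>"

def generate_pair(passage, task_type, model=None):
    """Stub: a real implementation would decode an instruction/response
    pair from a conditional generation model given this prompt."""
    prompt = build_prompt(passage, task_type)
    if model is None:  # placeholder output for illustration
        return {"instruction": f"[{task_type}] question about the passage",
                "response": "[generated answer]"}
    return model.generate(prompt)

passage = "The mitochondrion is the site of ATP synthesis in eukaryotic cells."
pair = generate_pair(passage, "extractive_qa")
print(sorted(pair))  # ['instruction', 'response']
```

The resulting pairs would then be used as instruction-tuning data to adapt a pretrained model to the target domain.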
Sailor, a suite of language models by Sea AI Lab and Singapore University of Technology and Design, caters to the intricate linguistic diversity of Southeast Asia. Its meticulous data handling equips it for accurate text generation and comprehension across languages like Indonesian, Thai, Vietnamese, Malay, and Lao. Pretrained on a vast corpus, Sailor sets new…
IBM Research has developed SimPlan, a hybrid approach that enhances large language models’ (LLMs) planning capabilities by integrating classical planning strategies. This innovative method addresses LLMs’ limitations in planning tasks and outperforms traditional LLM-based planners, showcasing its potential to revolutionize AI applications in decision-making and problem-solving across diverse industries.
Based is a groundbreaking language model introduced by researchers from Stanford University, University at Buffalo, and Purdue University. It integrates linear and sliding window attention to balance recall and efficiency in processing vast amounts of information. With IO-aware algorithms, Based achieves unparalleled efficiency and superior recall capabilities, setting a new standard for language models in…
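To make the recall/efficiency trade-off concrete: the numpy sketch below pairs exact softmax attention over a short sliding window (precise local recall) with a linear-attention approximation over the distant prefix (cheap long-range context). The feature map, window size, and the way the two outputs are mixed are illustrative assumptions, not Based's actual architecture, which uses a Taylor-approximation feature map and dedicated IO-aware kernels.

```python
import numpy as np

def feature_map(x):
    # Simple positive feature map standing in for a learned/Taylor feature
    # map in a real linear-attention layer; chosen only for illustration.
    return np.maximum(x, 0) + 1e-6

def hybrid_attention(q, k, v, window=2):
    """Toy per-token mix of windowed exact attention and linear attention."""
    T, d = q.shape
    out = np.zeros_like(v)
    for t in range(T):
        lo = max(0, t - window + 1)
        # Exact softmax attention inside the sliding window.
        scores = q[t] @ k[lo:t + 1].T / np.sqrt(d)
        w = np.exp(scores - scores.max())
        local = (w / w.sum()) @ v[lo:t + 1]
        if lo > 0:
            # Linear attention over the distant prefix; a real implementation
            # keeps a constant-size running state instead of re-reading keys.
            phi_q, phi_k = feature_map(q[t]), feature_map(k[:lo])
            far = (phi_q @ (phi_k.T @ v[:lo])) / (phi_q @ phi_k.sum(axis=0))
            out[t] = 0.5 * local + 0.5 * far  # illustrative mixing weights
        else:
            out[t] = local
    return out

rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((6, 4)) for _ in range(3))
print(hybrid_attention(q, k, v).shape)  # (6, 4)
```

The point of the hybrid is that the window handles exact nearby recall while the linear term keeps per-token cost independent of sequence length.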
Univ. of Oxford & Univ. College London present Craftax, a JAX-based RL benchmark that outperforms comparable benchmarks in speed. Its simplified variant, Craftax-Classic, is solvable by a basic PPO agent in 51 minutes, and the fast runtime encourages experiments with far more timesteps. Existing approaches perform poorly on the full benchmark, which Craftax aims to use to drive RL research under limited compute. Craftax-Classic serves as an entry point for Crafter users.
StarCoder2, an advanced code generation model, derives from the BigCode project, led by researchers from 30+ institutions. Trained on a vast dataset including GitHub repositories, it offers models of varying sizes (3B, 7B, 15B) with exceptional performance in code generation. The project prioritizes transparency, releasing model weights and training data details to encourage collaboration and…
The intersection of AI and the arts, particularly music, is a significant area of study given its bearing on human creativity, with researchers focusing on creating music through language models. Skywork AI and Hong Kong University of Science and Technology developed ChatMusician, which outperforms GPT-4 on music understanding but faces challenges in music variety and open-ended tasks. The open-source project aims to spur cooperation in this…
Salesforce AI Researchers introduced the SFR-Embedding-Mistral model to improve text-embedding models for natural language processing (NLP) tasks. It leverages multi-task training, task-homogeneous batching, and hard negatives to enhance performance significantly, particularly in retrieval tasks. The model demonstrates state-of-the-art results across diverse NLP benchmarks.
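The two named ingredients connect: with task-homogeneous batching, every other example in a batch comes from the same task, so in-batch negatives become harder. The sketch below shows the standard InfoNCE contrastive loss with in-batch negatives that this setup strengthens; the dimensions, temperature, and data are illustrative, not SFR-Embedding-Mistral's actual training configuration.

```python
import numpy as np

def info_nce_loss(queries, passages, temperature=0.05):
    """InfoNCE where passages[i] is the positive for queries[i] and all
    other passages in the batch serve as in-batch negatives."""
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    p = passages / np.linalg.norm(passages, axis=1, keepdims=True)
    logits = q @ p.T / temperature                 # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))            # diagonal = positive pairs

rng = np.random.default_rng(1)
B, d = 8, 16
queries = rng.standard_normal((B, d))
positives = queries + 0.1 * rng.standard_normal((B, d))  # aligned positives
unrelated = rng.standard_normal((B, d))                  # random "passages"
print(info_nce_loss(queries, positives) < info_nce_loss(queries, unrelated))
# True: aligned positives yield a lower loss than unrelated passages
```

Hard negatives push the same lever further: deliberately choosing negatives that are similar to the query makes the off-diagonal logits larger and the training signal sharper.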
The emergence of Large Language Models (LLMs) like GPT and LLaMA has prompted many organizations to develop their own, but training them from scratch remains resource-intensive. FUSECHAT, a novel chat-LLM integration approach, leverages knowledge fusion techniques and the VaRM merging method to outperform the individual source models and fine-tuned baselines. It offers a practical and efficient…
A novel framework called CyberDemo is introduced to address the challenges in robotic manipulation. It leverages simulated human demonstrations, remote data collection, and simulator-exclusive data augmentation to enhance task performance and surpass the limitations of real-world data. CyberDemo demonstrates significant improvements in manipulation tasks and outperforms traditional methods, showcasing the untapped potential of simulation data.
The integration of advanced technological tools is increasingly essential in urban planning, particularly with the emergence of specialized large language models like PlanGPT. Developed by researchers, PlanGPT offers a customized solution for urban and spatial planning, outperforming existing models by improving precision and relevance in tasks essential for urban planning professionals.
Recent advancements in AI and deep learning have led to significant progress in generative modeling. Autoregressive and diffusion models each have limitations in text generation, but the new SEDD model challenges these, offering high-quality and controllable text generation. It competes with autoregressive models like GPT-2, showing promise for generative modeling in NLP.