New York University researchers trained an AI system using 60 hours of first-person video recordings from children aged 6 months to 2 years. The AI employed self-supervised learning to understand actions and changes like a child. The study’s findings suggest AI can efficiently learn from limited, targeted data, challenging conventional AI training methods.
Researchers are working to optimize large language models (LLMs) like GPT-3, which demand substantial GPU memory. Existing quantization techniques have limitations, but a new system design, TC-FPx, and the FP6-LLM built on it provide a breakthrough. FP6-LLM significantly enhances inference performance, allowing single-GPU inference of large models at higher throughput, a major advance in AI deployment. For more details,…
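The memory savings come from packing each weight into six bits instead of sixteen. As a rough illustration only — FP6-LLM uses a 6-bit *floating-point* format with custom TC-FPx Tensor Core kernels, not the plain 6-bit integer scheme below — here is a sketch of symmetric 6-bit quantization and the memory arithmetic behind the single-GPU claim:

```python
# Illustrative 6-bit symmetric integer quantization (a stand-in for the
# real FP6 float format, which this sketch does NOT reproduce).
import numpy as np

def quantize_6bit(w):
    # Map [-max|w|, +max|w|] onto the signed 6-bit range [-31, 31].
    scale = np.abs(w).max() / 31.0
    q = np.clip(np.round(w / scale), -31, 31).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)
q, s = quantize_6bit(w)
w_hat = dequantize(q, s)

# 6 bits per weight vs 16 for FP16 is a ~2.7x reduction in weight
# memory, which is what lets larger models fit on a single GPU.
print("max abs round-trip error:", np.abs(w - w_hat).max())
```

The rounding error per weight is bounded by half the quantization step, which is why quality can be largely preserved despite the compression.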
Auto-regressive decoding in large language models (LLMs) is time-consuming and costly. Speculative sampling methods aim to solve this issue by speeding up the process, with EAGLE being a notable new framework. It operates at the feature level and achieves higher drafting accuracy and faster generation than comparable systems. EAGLE improves LLM throughput and can…
Nightshade, a tool from the University of Chicago, gained over 250,000 downloads within five days of its release. It combats unauthorized use of artwork by AI models by altering images at the pixel level, poisoning models trained on them so they cannot replicate the images accurately. The team is overwhelmed by its success, with potential future integration and cloud hosting.
US lawmakers have proposed the DEFIANCE Act to address the growing problem of AI-generated explicit images. Prompted by a series of deep fake AI-generated images of Taylor Swift, the bipartisan bill aims to empower individuals to sue for damages if they are depicted in “digital forgeries” without consent. This legislation expands the legal framework to…
Mastercard has developed a new generative AI fraud detection tool, called Decision Intelligence Pro (DI Pro), powered by a recurrent neural network. It analyzes cardholders’ purchasing histories and scans data points to predict transaction authenticity in less than 50 milliseconds. Initial modeling suggests fraud detection rates improve by roughly 20% on average, and by as much as 300% in some cases. The tool is expected…
This week’s AI news features the following highlights: 1. Taylor Swift’s battle against explicit AI deep fake images and the concerning ease of generating such content using AI. 2. The rise of political deep fakes showcasing AI’s capabilities in replicating voices with convincing realism and the challenges of detecting these fakes. 3. OpenAI’s evolving transparency…
The CMMMU benchmark has been introduced to bridge the gap between powerful Large Multimodal Models (LMMs) and expert-level artificial intelligence in tasks involving complex perception and reasoning with domain-specific knowledge. It comprises 12,000 Chinese multimodal questions across six core disciplines and employs a rigorous data collection and quality control process. The benchmark evaluates LMMs, presents…
The integration of large language models (LLMs) in software development has revolutionized code intelligence, automating aspects of programming and increasing productivity. Disparities between open-source and closed-source models have hindered accessibility and democratization of advanced coding tools. DeepSeek-AI and Peking University’s DeepSeek-Coder series addresses this gap, enhancing open-source models’ functionality and performance, marking a significant advancement…
AgentBoard, developed by researchers from multiple Chinese universities, presents a benchmark framework and toolkit for evaluating LLM agents. It addresses challenges in assessing multi-round interactions and diverse scenarios in agent tasks. With a fine-grained progress rate metric and interactive visualization, it illuminates the capabilities and limitations of LLM agents across varied environments.
Researchers from The Chinese University of Hong Kong and Tencent AI Lab introduce the Multimodal Pathway Transformer (M2PT) to enhance transformer performance by incorporating irrelevant data from other modalities, yielding substantial gains across various recognition tasks. The approach uses Cross-Modal Re-parameterization, which folds the auxiliary weights into the model so they incur no extra inference cost.
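The "no inference cost" claim rests on a simple linear-algebra fact. In my reading of the summary (a sketch of the general re-parameterization idea, not the paper's exact formulation), a layer trains with its own weight W plus an auxiliary weight W_aux from another modality, scaled by a factor lambda; at inference the two are folded into one matrix:

```python
# Re-parameterization sketch: two weight paths at training time collapse
# into a single matrix for inference, so the auxiliary weights are free
# at runtime. Shapes and the fixed lambda are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((8, 8))      # target-modality weight
W_aux = rng.standard_normal((8, 8))  # auxiliary weight from another modality
lam = 0.1                            # learnable scale (fixed here)
x = rng.standard_normal(8)

# Training-time view: two matrix-vector products.
y_train = W @ x + lam * (W_aux @ x)

# Inference-time view: fold once, then run a single product.
W_merged = W + lam * W_aux
y_infer = W_merged @ x

print(np.allclose(y_train, y_infer))  # True: identical by linearity
```

The equality holds exactly for any linear layer, which is why the auxiliary pathway adds capacity during training without changing the deployed model's cost.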
The demand for AI is challenging environmental sustainability, as it significantly increases electricity consumption. Data centers, particularly those supporting generative AI, strain global energy infrastructure. The rising electricity demands from AI and data centers are creating environmental and grid stability concerns, urging the need for more sustainable practices within the AI industry and alternative energy…
Recent advancements in large language models (LLMs) like ChatGPT and LLaMA-2 have led to an exponential increase in parameters, posing challenges in inference delay. To address this, Intellifusion Inc. and Harbin Institute of Technology propose Bi-directional Tuning for lossless Acceleration (BiTA) to expedite LLMs, achieving significant speedups without compromising output quality.
Researchers from the University of Surrey have used AI to improve carbon capture technology. By employing AI algorithms, they achieved a 16.7% increase in CO2 capture and reduced energy usage by 36.3%. The system employed packed bubble column reactor and machine learning techniques to optimize performance. This study demonstrates the potential of AI in creating…
Researchers from UC Berkeley and UCSF have introduced Cross-Attention Masked Autoencoders (CrossMAE) in computer vision, aiming to enhance processing efficiency for visual data. By leveraging cross-attention exclusively for decoding masked patches, CrossMAE simplifies and expedites the decoding process, achieving substantial computational reduction while maintaining quality and performance in complex tasks. This research presents a groundbreaking…
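The efficiency claim attributed to CrossMAE can be seen in the shape of the attention matrix: masked-patch queries attend only to the visible-patch encoder outputs, never to each other. The single-head form and dimensions below are illustrative assumptions, not the paper's configuration:

```python
# Cross-attention decoding sketch: queries come from masked-patch tokens,
# keys/values from the encoder's visible-patch outputs only.
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

d = 16
n_visible, n_masked = 49, 147  # e.g. 75% of 196 patches masked
rng = np.random.default_rng(0)
visible = rng.standard_normal((n_visible, d))  # encoder outputs
queries = rng.standard_normal((n_masked, d))   # mask-token queries

Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
Q, K, V = queries @ Wq, visible @ Wk, visible @ Wv

# The attention matrix is n_masked x n_visible rather than the full
# (n_masked + n_visible)^2 of self-attention over all tokens -- that
# asymmetry is the source of the computational savings.
attn = softmax(Q @ K.T / np.sqrt(d))
decoded = attn @ V
print(decoded.shape)  # (147, 16)
```

Each decoded row is a reconstruction signal for one masked patch, computed without any masked-to-masked interaction.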
RAND and OpenAI issued conflicting reports on the possibility of using AI for bioweapon development. OpenAI’s study, in which biology experts worked with internet access and a research version of GPT-4, found that the model may enhance access to biological threat information, but emphasized that information access alone is insufficient for bioweapon creation. The study concluded…
On February 1, 2024, AI-related companies suffered a significant setback, collectively losing $190 billion in market value after disappointing quarterly results from major players such as Microsoft, Alphabet, and AMD. The drop in stock prices was driven by unmet investor expectations following the recent AI boom, signaling challenges ahead despite high hopes for the technology’s…
High-throughput computational screening and ML algorithms enable scientists to surpass traditional limitations, facilitating dynamic materials exploration. This approach has led to the discovery of new materials with unique properties, marking a notable advance in materials discovery.
The OK-Robot system, developed by researchers from NYU and Meta, can train robots to pick up and move objects in new settings using an open-source AI object detection model. In tests across real homes, the robot successfully completed tasks in 58.5% of cases, rising to 82% in less cluttered rooms. The use of open-source AI models presents both…
Understanding the decision-making processes of Large Language Models (LLMs) is crucial for mitigating potential risks in high-stakes applications. A study by researchers from MIT and the University of Cambridge explores the universality of individual neurons across GPT-2 language models, revealing that only a small percentage exhibit universality. The findings provide insights into the development of…
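One way such universality can be measured (my assumption of the methodology from this summary, not necessarily the study's exact procedure) is to record each neuron's activations over a shared input set and ask, for every neuron in model A, whether any neuron in model B is strongly correlated with it:

```python
# Universality sketch: a neuron counts as "universal" if some neuron in
# an independently trained model has highly correlated activations.
# All data here is synthetic, with two shared neurons planted by hand.
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_a, n_b = 200, 10, 10
acts_a = rng.standard_normal((n_inputs, n_a))  # model A activations
acts_b = rng.standard_normal((n_inputs, n_b))  # model B activations
# Plant two genuinely shared neurons (sign flips still count as shared).
acts_b[:, 0] = acts_a[:, 3] + 0.05 * rng.standard_normal(n_inputs)
acts_b[:, 5] = -acts_a[:, 7] + 0.05 * rng.standard_normal(n_inputs)

# Pearson correlation between every (A-neuron, B-neuron) pair.
za = (acts_a - acts_a.mean(0)) / acts_a.std(0)
zb = (acts_b - acts_b.mean(0)) / acts_b.std(0)
corr = za.T @ zb / n_inputs  # shape (n_a, n_b)

# A-neurons whose best match in B exceeds the threshold.
universal = np.abs(corr).max(axis=1) > 0.9
print("fraction universal:", universal.mean())  # 0.2: the 2 planted neurons
```

With only the two planted neurons passing the threshold, the measured fraction is small — echoing the study's headline finding that most neurons are not universal.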