Meta AI has introduced “Relightable Gaussian Codec Avatars,” a method for high-fidelity relighting of dynamic 3D head avatars. The approach combines a 3D Gaussian geometry model with a learnable radiance-transfer appearance model to capture sub-millimeter detail and enable real-time relighting. This elevates the realism and interactivity of avatar animation, marking…
Brain organoids, lab-grown mini-brains created from human stem cells, have been integrated with computers to achieve speech recognition. This innovative “Brainoware” system, described in a study in Nature Electronics, represents a shift from traditional AI using silicon chips. Despite challenges, its potential for creating energy-efficient AI hardware with human brain-like functionality is evident.
A University of Warwick study unveils an AI system, X-Raydar, trained on 2.8 million chest X-rays, demonstrating accuracy comparable to that of doctors for 94% of the conditions tested. It highlights the potential for efficient diagnosis, particularly in addressing radiologist shortages. X-Raydar has been open-sourced to foster further advances in AI medical technology.
This paper introduces LiDAR, a metric designed to measure the quality of representations in Joint Embedding (JE) architectures, addressing the challenge of evaluating learned representations. JE architectures have potential for transferable data representations, but evaluating them without access to a task and dataset is difficult. LiDAR aims to facilitate efficient and reliable evaluation.
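The exact LiDAR formulation is based on linear discriminant analysis, but the underlying intuition, scoring a representation by how many directions of its embedding space actually carry information, can be illustrated with an entropy-based effective rank. This is a minimal sketch of that general idea, not the paper's metric; the helper names `effective_rank` and `sym2x2_eigenvalues` are invented for the example.

```python
import math

def effective_rank(eigenvalues):
    """Entropy-based effective rank: exp(-sum p_i * log p_i),
    where p_i are eigenvalues normalized to sum to 1."""
    total = sum(eigenvalues)
    probs = [e / total for e in eigenvalues if e > 0]
    entropy = -sum(p * math.log(p) for p in probs)
    return math.exp(entropy)

def sym2x2_eigenvalues(a, b, c):
    """Eigenvalues of the symmetric 2x2 matrix [[a, b], [b, c]]."""
    mean = (a + c) / 2.0
    disc = math.sqrt(((a - c) / 2.0) ** 2 + b * b)
    return [mean + disc, mean - disc]

# Isotropic covariance: both directions used, effective rank near 2.
print(effective_rank(sym2x2_eigenvalues(1.0, 0.0, 1.0)))
# Collapsed covariance: one direction dominates, effective rank near 1.
print(effective_rank(sym2x2_eigenvalues(1.0, 0.0, 1e-6)))
```

A higher effective rank suggests the representation has not collapsed onto a few directions, which is one signal of representation quality without needing a downstream task.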
After months of negotiations, EU lawmakers have reached a deal on the groundbreaking AI Act. The Act introduces strict transparency and ethics rules for tech companies, regulates powerful AI models, establishes governance and enforcement mechanisms, sets fines for noncompliance, and bans certain AI uses.
This blog post outlines the capabilities of diffusion models for generating custom data by using additional conditioning. It introduces methods such as Stable Diffusion Inpainting, ControlNet, and GLIGEN, and highlights how fine-tuning with Low-Rank Adaptation (LoRA) can efficiently adapt these methods to specific use cases. The article emphasizes the benefits of enhancing…
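LoRA's core trick, freezing the base weights and training only a low-rank update, can be sketched in a few lines of plain Python. This illustrates the math only, not the diffusers or PEFT API; the helper names are invented for the example.

```python
def matmul(A, B):
    """Naive matrix multiply for small lists-of-lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def matvec(M, x):
    return [sum(m * v for m, v in zip(row, x)) for row in M]

def lora_forward(W, A, B, x, alpha=1.0):
    """y = (W + alpha * B @ A) x, with W frozen and only A, B trained."""
    delta = matmul(B, A)  # rank-r update, r = number of rows of A
    W_eff = [[w + alpha * d for w, d in zip(wr, dr)]
             for wr, dr in zip(W, delta)]
    return matvec(W_eff, x)

# 4x4 frozen weight (identity); rank-1 adapter: B is 4x1, A is 1x4.
W = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
B = [[1.0], [0.0], [0.0], [0.0]]
A = [[0.0, 2.0, 0.0, 0.0]]
x = [1.0, 1.0, 1.0, 1.0]
print(lora_forward(W, A, B, x))  # → [3.0, 1.0, 1.0, 1.0]
# Trainable params: 4*1 + 1*4 = 8, versus 16 for full fine-tuning.
```

The efficiency comes from the parameter count: for a d×d weight, a rank-r adapter trains only 2·d·r values instead of d², which is why LoRA makes adapting large diffusion models to niche use cases cheap.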
The Anticipatory Music Transformer, developed by Stanford scholars, empowers composers with unique control over generative AI music composition. Differentiating itself from other tools, it focuses on symbolic music and incorporates users’ preferences. Integrated with the GPT architecture, it offers more interactive and controllable outputs. Anticipated to revolutionize music composition, it aims to make music creation…
The introduction of Large Language Models (LLMs) has brought renewed attention to Natural Language Processing, Natural Language Generation, and Computer Vision. Researchers from Tsinghua University and Microsoft Research Asia introduced Bridge-TTS, which replaces the noisy prior of diffusion-based TTS models with a clean, deterministic one, achieving better synthesis than Grad-TTS and FastGrad-TTS in both speed and generation quality. Find out more at…
Audiobox is a new AI model developed by Meta researchers. It can generate voices and sound effects from voice inputs and natural-language text prompts, making it easier to create custom audio for various use cases. It offers unified generation and editing capabilities for speech, sound effects, and soundscapes, revolutionizing the audio creation process.
Reinforcement Learning (RL) learns to maximize cumulative reward by identifying optimal actions from experience. It is applied in fields like autonomous driving and robotics. Existing RL libraries often lack features such as support for delayed rewards and safe learning. Meta developed Pearl to address these gaps; built on PyTorch, it includes policy learning, exploration, safety measures, and efficient data reuse. Pearl outperforms other libraries and…
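Pearl's own API is not shown here; as an illustration of the basic loop such libraries build on, learning action values from experience, here is a minimal tabular Q-learning sketch on a toy chain environment (environment and hyperparameters are invented for the example).

```python
import random

random.seed(0)
# Tiny deterministic chain MDP: states 0..3, actions 0 (left) / 1 (right);
# reaching state 3 yields reward 1 and ends the episode.
N_STATES, GOAL = 4, 3

def step(state, action):
    nxt = min(state + 1, GOAL) if action == 1 else max(state - 1, 0)
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, eps = 0.5, 0.9, 0.5  # learning rate, discount, exploration

for _ in range(500):  # episodes
    s, done = 0, False
    while not done:
        # Epsilon-greedy: explore sometimes, otherwise act greedily.
        a = random.randrange(2) if random.random() < eps else max((0, 1), key=lambda a: Q[s][a])
        s2, r, done = step(s, a)
        # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a').
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

greedy = [max((0, 1), key=lambda a: Q[s][a]) for s in range(N_STATES)]
print(greedy)  # the learned policy moves right, toward the goal
```

Real libraries such as Pearl layer policy learning, structured exploration, and safety constraints on top of this experience-driven value-update loop.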
Meta’s AI image generator “Imagine with Meta AI” has transitioned from a social media feature to a standalone product. Despite its limits with rendering text, the generator delivers high-quality images at 1280×1280 resolution. Trained on a dataset of appealing images, it learns user preferences. However, users should be cautious of copyright concerns and potential legal issues surrounding…
On December 11, 2023, Rakuten announced the launch of its own large language model (LLM), which the company expects to improve the efficiency of its internal operations and marketing by 20%. Rakuten also plans to offer the technology to third-party businesses, positioning the firm as a competitor to tech giants like Amazon and Microsoft in the AI space. This move reflects Japan’s…
A Support Vector Machine (SVM) is a versatile supervised learning algorithm used in machine learning for tasks like classification and regression. It finds a decision boundary (a hyperplane) that separates classes with the widest possible margin based on their features. SVMs come in linear and non-linear (kernel) variants and apply to fields such as spam email filtering, handwriting recognition, medical diagnosis, and stock market prediction.
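A minimal sketch of the linear case: training a separating hyperplane with stochastic subgradient descent on the hinge loss (a Pegasos-style update). The toy data and hyperparameters are invented for illustration; in practice one would use a library implementation such as scikit-learn's `SVC`.

```python
import random

random.seed(1)
# Toy linearly separable 2-D data: label +1 if x0 + x1 > 0, else -1.
data = [((x0, x1), 1 if x0 + x1 > 0 else -1)
        for x0, x1 in [(2, 1), (1, 2), (3, 0.5), (-2, -1), (-1, -2), (-0.5, -3)]]

w, b = [0.0, 0.0], 0.0
lam, lr = 0.01, 0.1  # regularization strength and learning rate

for epoch in range(100):
    random.shuffle(data)
    for (x0, x1), y in data:
        margin = y * (w[0] * x0 + w[1] * x1 + b)
        # Subgradient step on hinge loss max(0, 1 - margin) + (lam/2)*||w||^2:
        if margin < 1:  # point inside the margin (or misclassified)
            w[0] += lr * (y * x0 - lam * w[0])
            w[1] += lr * (y * x1 - lam * w[1])
            b += lr * y
        else:           # correctly classified with room to spare: only shrink w
            w[0] -= lr * lam * w[0]
            w[1] -= lr * lam * w[1]

predict = lambda x0, x1: 1 if w[0] * x0 + w[1] * x1 + b > 0 else -1
print([predict(x0, x1) for (x0, x1), _ in data])
```

The hinge loss only penalizes points that are misclassified or too close to the boundary, which is what drives the maximum-margin behavior; kernel SVMs apply the same idea after mapping features into a higher-dimensional space.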
Natural Language Processing has recently been transformed by Large Language Models, including the GPT series, leading to significant advances in linguistic tasks. Autoregressive pretraining has been key to this progress, fostering a better understanding of language, and is now being extended to computer vision. D-iGPT, developed by Johns Hopkins and UC Santa Cruz researchers, has…
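The autoregressive objective, predicting the next token from the ones before it, can be illustrated with a toy character-level bigram model. This is a didactic sketch of the objective only, not D-iGPT's image-based method; the corpus and helper names are invented.

```python
from collections import defaultdict

corpus = "the cat sat on the mat. the cat ate."
# Count next-character frequencies conditioned on the current character.
counts = defaultdict(lambda: defaultdict(int))
for cur, nxt in zip(corpus, corpus[1:]):
    counts[cur][nxt] += 1

def predict_next(ch):
    """Greedy autoregressive step: most frequent continuation of `ch`."""
    followers = counts[ch]
    return max(followers, key=followers.get) if followers else None

# Generate a short continuation one token at a time, feeding each
# prediction back in: the defining loop of autoregressive decoding.
out, ch = "", "t"
for _ in range(5):
    ch = predict_next(ch)
    out += ch
print("t" + out)
```

Large models replace the count table with a neural network and characters with tokens (or, in D-iGPT's setting, visual tokens), but the train-to-predict-the-next-element recipe is the same.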
MIT leaders and scholars release policy briefs outlining a framework for U.S. artificial intelligence (AI) governance, aiming to enhance U.S. leadership and limit potential harm. The approach involves extending current regulatory and liability approaches and emphasizes identifying the purpose and intent of AI tools. The project aims to address various regulatory challenges in the AI…
Google has unveiled its Cloud TPU v5p, a powerful tensor processing unit boasting a performance-driven design and significant speed improvements over its predecessor. Alongside it, Google announced the AI Hypercomputer, which pairs optimized hardware with open-source software, and Dynamic Workload Scheduler, a resource-management tool. Together, these innovations mark a significant leap in AI processing capability and promise to redefine AI computation.
Researchers from Stanford University and FAIR Meta have introduced CHOIS, a system for generating synchronized 3D human-object interactions based on language descriptions and sparse object waypoints. Leveraging large-scale motion capture datasets, CHOIS advances human motion modeling and demonstrates superior performance in evaluations. The system’s potential for integration into long-term interaction pipelines and future research directions…
A remarkable advancement in competitive programming, AlphaCode 2 is an AI system developed by Google DeepMind, leveraging the powerful Gemini model. It features advanced Large Language Models and a sophisticated search and reranking system tailored for competitive programming, showcasing impressive problem-solving capabilities and outperforming its predecessor. This represents a significant leap in the cooperation between…
Contemporary machine learning relies on foundation models (FMs), often built on sequence models such as the Transformer, which suffers from a finite context window and computational cost that scales quadratically with sequence length. A new family of models, structured state space sequence models (SSMs), addresses these issues and has been shown effective in certain domains. Researchers have introduced Mamba, a novel SSM architecture,…
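At the heart of state space sequence models is a linear recurrence over a hidden state, which runs in time linear in sequence length rather than quadratic as in attention. A scalar-state sketch of that recurrence (illustrative only; it omits Mamba's selective, input-dependent parameters and uses matrix-valued state in real models):

```python
def ssm_scan(A, B, C, xs, h0=0.0):
    """Run the linear state-space recurrence over an input sequence:
        h_t = A * h_{t-1} + B * x_t
        y_t = C * h_t
    Scalar state for clarity; cost is O(sequence length)."""
    h, ys = h0, []
    for x in xs:
        h = A * h + B * x
        ys.append(C * h)
    return ys

# With A = 0.5 the state is an exponentially decaying memory of past inputs:
print(ssm_scan(A=0.5, B=1.0, C=1.0, xs=[1.0, 0.0, 0.0, 0.0]))
# → [1.0, 0.5, 0.25, 0.125]: the impulse response decays geometrically.
```

Because the state is a fixed-size summary of everything seen so far, SSMs have no hard context-window limit, which is exactly the Transformer drawback this line of work targets.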
Novel applications of machine learning have been made possible by the emergence of Low-Code and No-Code AI tools and platforms. These tools enable the creation of web services and customer-facing apps with minimal coding expertise. Noteworthy tools include MakeML for machine-learning models, Obviously AI for accurate predictions, and SuperAnnotate for high-throughput data annotation.