Large language model
AI’s effectiveness relies heavily on the availability of training data. However, a study by University of Toronto Engineering researchers suggests that deep learning models may not always require large amounts of it. The researchers found that smaller subsets of data can be used to train models without compromising accuracy. The study emphasizes the significance…
Microsoft has officially announced its in-house designed chips, the Azure Maia 100 AI accelerator and Azure Cobalt CPU, at the Ignite conference. These chips demonstrate Microsoft’s commitment to innovation and self-sufficiency across hardware and software. They are set to power Azure’s AI workloads and will be integrated into specially designed server motherboards and racks. Microsoft…
GOAT is a universal navigation system developed by researchers from various universities and organizations. It operates autonomously in home and warehouse environments, using category labels, target images, and language descriptions to interpret goals. GOAT creates a 3D semantic voxel map for accurate object detection and memory storage, and it has demonstrated superior performance in reaching…
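As a rough illustration of what a 3D semantic voxel map can look like in code (the class name, voxel size, and labels below are assumptions for illustration, not GOAT’s implementation):

```python
# Minimal sketch of a 3D semantic voxel map in the spirit of the memory GOAT
# is described as building; all names and parameters here are illustrative.
from collections import defaultdict
import numpy as np

class SemanticVoxelMap:
    def __init__(self, voxel_size: float = 0.05):
        self.voxel_size = voxel_size
        # voxel index -> set of category labels observed at that location
        self.voxels = defaultdict(set)

    def _to_index(self, xyz: np.ndarray) -> tuple:
        return tuple(np.floor(xyz / self.voxel_size).astype(int))

    def add_observation(self, xyz: np.ndarray, label: str) -> None:
        """Record that `label` (e.g. 'chair') was detected at world position xyz."""
        self.voxels[self._to_index(xyz)].add(label)

    def query(self, label: str) -> list:
        """Return voxel centers where `label` has been seen, for goal navigation."""
        return [
            (np.array(idx) + 0.5) * self.voxel_size
            for idx, labels in self.voxels.items()
            if label in labels
        ]

# Usage: populate from detections, then look up where a "chair" was seen.
m = SemanticVoxelMap()
m.add_observation(np.array([1.02, 0.31, 0.75]), "chair")
print(m.query("chair"))
```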
Researchers from MIT and IAIFI have developed a framework called Feature Fields for Robotic Manipulation (F3RM), which addresses the challenge of enabling robots to manipulate objects in cluttered environments. F3RM leverages distilled feature fields to combine 3D geometry with semantic information from 2D models, bridging the gap between 2D image features and 3D geometry. The…
UrbanGIRAFFE, a new approach by researchers from Zhejiang University, addresses the challenges in generating urban scenes for camera viewpoint control and scene editing. By breaking down the scene into stuff, objects, and sky, the model allows for diverse controllability, including large camera movements and object manipulation. UrbanGIRAFFE outperforms existing methods and offers remarkable versatility for…
Researchers from the University of Washington and Microsoft have developed noise-canceling headphones with semantic hearing capabilities, enabled by advanced machine learning algorithms. These headphones allow users to selectively choose the sounds they want to hear while blocking out other distractions. The system relies on a neural network running on a connected smartphone for sound processing and has the…
MIT researchers have developed MechGPT, a novel model for extracting insights from scientific texts in the field of materials science. MechGPT employs a two-step process using a general-purpose language model to generate question-answer pairs and enhance clarity. The model is trained using PyTorch and the Hugging Face ecosystem, with additional techniques such as Low-Rank Adaptation…
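The summary mentions Low-Rank Adaptation; below is a minimal sketch of what LoRA fine-tuning looks like with the Hugging Face PEFT library. The base model, rank, and target modules are placeholders, not MechGPT’s actual settings.

```python
# Hedged sketch of LoRA fine-tuning with Hugging Face PEFT: only small
# low-rank adapter matrices are trained on top of a frozen base model.
# The model name and hyperparameters below are assumptions for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "facebook/opt-350m"  # placeholder base model, not MechGPT's
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

lora_config = LoraConfig(
    r=16,                                 # low-rank dimension (assumed)
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # attention projections (assumed)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```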
Researchers at NVIDIA have introduced a GPU-accelerated Weighted Finite State Transducer (WFST) beam search decoder that improves the performance of Automated Speech Recognition (ASR) systems. The decoder enhances efficiency, reduces latency, and supports advanced features like on-the-fly composition for word boosting. In offline testing, the GPU-accelerated decoder showed seven times higher throughput compared to the…
Tech giant Meta has disbanded its Responsible AI (RAI) team as part of a strategic shift toward generative artificial intelligence. The RAI team, established in 2019, focused on ethical development and accountability in AI. Most members have been reassigned to Meta’s generative AI product team, while others now work on the company’s AI infrastructure. Despite…
Meta AI researchers have introduced two groundbreaking advancements in the field of generative AI: Emu Video and Emu Edit. Emu Video streamlines the process of text-to-video generation, setting a new standard for high-quality video generation. Emu Edit is a multi-task image editing model that redefines instruction-based image manipulation, offering precise control and adaptability. These innovations…
Large Language Models (LLMs) excel in various natural language tasks but struggle with goal-directed conversations. UC Berkeley researchers propose adapting LLMs using reinforcement learning (RL) to improve goal-directed dialogues. They introduce an imagination engine (IE) to generate diverse synthetic data and use an offline RL approach to reduce computational costs. Their method consistently outperforms traditional…
Tarsier is an open-source Python library created by Reworkd to facilitate web interaction with multi-modal Language Models (LLMs) like GPT-4. It simplifies web interaction for LLMs by visually tagging interactable elements on web pages with brackets and unique identifiers, so the model can refer to each element directly when deciding what to click or type. It also offers OCR utilities to…
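The visual-tagging idea can be sketched generically with Playwright; this illustrates the concept rather than Tarsier’s actual API, and the selectors and tag format are assumptions.

```python
# Generic illustration of visual tagging (not Tarsier's API): find
# interactable elements and prepend a bracketed ID so a vision/text model
# can refer to them as "[3]" when choosing an action.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("https://example.com")

    elements = page.query_selector_all("a, button, input, textarea")
    tag_to_element = {}
    for i, el in enumerate(elements):
        tag_to_element[f"[{i}]"] = el
        # Overlay the tag in the page so it shows up in a screenshot for the LLM.
        el.evaluate(
            "(node, tag) => node.insertAdjacentText('afterbegin', tag + ' ')",
            f"[{i}]",
        )

    page.screenshot(path="tagged_page.png")  # image the model reasons over
    browser.close()
```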
Coral reefs are home to diverse marine life and provide important environmental and economic benefits. However, they are susceptible to bleaching as water temperatures rise due to global warming. Bleaching leads to environmental and economic problems, including increased CO2 levels and greater difficulty for other marine organisms in forming their skeletons. Researchers from Chosun University are…
Latent Diffusion Models are generative models used in machine learning to capture a dataset’s underlying structure. Researchers at Tsinghua University have introduced LCM-LoRA, a training-free acceleration module that speeds up the image generation process. By combining LCM-LoRA parameters with other LoRA parameters, high-fidelity images can be generated efficiently in only a few sampling steps. This approach revolutionizes text-to-image…
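A hedged sketch of what few-step generation with an LCM-LoRA looks like using the diffusers library; the checkpoint names are the publicly released SDXL ones, while the step count and guidance scale are illustrative.

```python
# Sketch of accelerated sampling with an LCM-LoRA in diffusers:
# swap in the LCM scheduler, load the acceleration LoRA, and sample
# in a handful of steps instead of the usual 25-50.
import torch
from diffusers import StableDiffusionXLPipeline, LCMScheduler

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl")

image = pipe(
    "a watercolor painting of a lighthouse at dusk",
    num_inference_steps=4,   # few-step sampling (illustrative)
    guidance_scale=1.0,      # LCM-style low guidance (illustrative)
).images[0]
image.save("lighthouse.png")
```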
Palo Alto Networks has launched the Cortex XSIAM 2.0 platform, which includes a bring-your-own-machine-learning (BYOML) framework. This framework allows security teams to create and implement their machine-learning models tailored to their specific needs, enhancing security measures against evolving threats. The platform also features the XSIAM Command Center for efficient incident response and the MITRE ATT&CK…
Researchers from Vanderbilt University and UC Davis have introduced a framework called PRANC, which reparameterizes deep models as a linear combination of randomly initialized and frozen models. PRANC enables significant compression of deep models, addressing challenges in storage and communication. It outperforms existing methods, including traditional codecs and learning-based approaches, in image compression. The study…
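A minimal sketch of the underlying idea, as an illustration of the principle rather than the authors’ implementation: the weights are reconstructed as a learned linear combination of frozen, seed-regenerable random bases, so only the combination coefficients (and the seeds) need to be stored or transmitted.

```python
# Toy illustration of the PRANC principle in PyTorch: train only the
# coefficients of a linear combination of K frozen pseudo-random bases.
import torch

def basis_weights(seed: int, shape, device="cpu"):
    """Deterministically regenerate a frozen random basis from its seed."""
    g = torch.Generator(device=device).manual_seed(seed)
    return torch.randn(shape, generator=g, device=device)

K = 64                    # number of basis networks (assumed)
shape = (256, 256)        # a single weight matrix, for simplicity
alphas = torch.zeros(K, requires_grad=True)  # the only trainable parameters

def reconstruct_weight():
    # w = sum_k alpha_k * theta_k, each theta_k regenerated from its seed
    return sum(alphas[k] * basis_weights(k, shape) for k in range(K))

# One toy training step on a regression objective.
x, y = torch.randn(8, 256), torch.randn(8, 256)
opt = torch.optim.Adam([alphas], lr=1e-2)
loss = torch.nn.functional.mse_loss(x @ reconstruct_weight(), y)
loss.backward()
opt.step()
print("stored parameters:", alphas.numel(), "instead of", 256 * 256)
```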
Large language models (LLMs) have impressive few-shot learning capabilities, but they still struggle with complex reasoning in chaotic contexts. This article proposes a technique that combines Thread-of-Thought (ToT) prompting with a Retrieval Augmented Generation (RAG) framework to enhance LLMs’ understanding and problem-solving abilities. The RAG system accesses multiple knowledge graphs in parallel, improving efficiency and…
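A rough sketch of the combination described above, with placeholder retrieval and LLM calls standing in for real knowledge-graph queries and a model endpoint.

```python
# Sketch: query several knowledge sources in parallel, then wrap the
# retrieved passages in a Thread-of-Thought style prompt. `search_graph`
# and `call_llm` are placeholders, not a specific system's API.
from concurrent.futures import ThreadPoolExecutor

def search_graph(graph_name: str, query: str) -> str:
    # Placeholder: a real system would run a graph or vector query here.
    return f"[{graph_name}] facts relevant to: {query}"

def call_llm(prompt: str) -> str:
    # Placeholder for an actual LLM call.
    return "model answer"

def answer(query: str, graphs=("products", "customers", "incidents")) -> str:
    # Retrieve from all knowledge graphs concurrently.
    with ThreadPoolExecutor() as pool:
        passages = list(pool.map(lambda g: search_graph(g, query), graphs))

    context = "\n\n".join(passages)
    prompt = (
        f"{context}\n\n"
        f"Q: {query}\n"
        # Thread-of-Thought trigger: process the chaotic context piece by piece.
        "Walk me through this context in manageable parts step by step, "
        "summarizing and analyzing as we go.\n"
        "A:"
    )
    return call_llm(prompt)

print(answer("Which incidents affected our largest customer last quarter?"))
```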
This article provides a beginner’s guide to writing AI agents for games. It can help you get started and create game-winning agents.
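The guide’s own code isn’t reproduced here; as a generic minimal example of a game-playing agent, here is a tiny minimax player for tic-tac-toe showing the usual evaluate-search-act shape.

```python
# Minimal game agent example (illustrative, not from the article): a minimax
# player for tic-tac-toe that searches moves and picks the best-scoring one.
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move), with 'X' maximizing and 'O' minimizing."""
    w = winner(board)
    if w is not None:
        return (1 if w == "X" else -1), None
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0, None  # draw
    best = None
    for m in moves:
        board[m] = player
        score, _ = minimax(board, "O" if player == "X" else "X")
        board[m] = " "
        if (best is None
                or (player == "X" and score > best[0])
                or (player == "O" and score < best[0])):
            best = (score, m)
    return best

# X to move; the agent should pick square 6, which blocks O's diagonal
# threat and sets up a double threat for X.
board = list("X O O   X")
print("best move for X:", minimax(board, "X")[1])
```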
This text discusses a customized copilot used to streamline research and development for a type of artificial neural network known as a physics-informed neural network (PINN). The copilot assists in improving efficiency and productivity in the development process.
Researchers from Duke and Johns Hopkins Universities have developed an approach called SneakyPrompt that bypasses safety filters in generative AI models like Stable Diffusion and DALL-E to generate explicit or violent images. By replacing banned words with semantically similar ones, the researchers were able to trick the models into generating the desired images. To prevent…