Meta has launched new initiatives to increase transparency around AI-generated content on its platforms. It has committed to labeling AI-generated images and is working with industry partners to establish common technical standards. Meta plans to extend labeling to content from more sources and is exploring technologies to detect AI-generated content.
Microsoft partners with Semafor to help journalists utilize AI for news creation. Semafor, founded by ex-BuzzFeed and Bloomberg execs, launches “Signals” with Microsoft’s backing, aiming to deliver diverse and up-to-date perspectives on global news. The use of AI tools for news research sparks questions about objectivity and the potential for AI to eventually write stories.
Researchers at New York University trained an AI model on data captured from a baby’s perspective in an attempt to mimic human learning. The approach challenges the convention of training on massive datasets and shows promise in the model’s ability to match words to objects. This method, inspired by how babies learn, could help advance AI systems.
Recent research by EPFL and Meta introduces the Chain-of-Abstraction (CoA) reasoning method for large language models (LLMs) to enhance multi-step reasoning by efficiently leveraging tools. The method separates general reasoning from domain-specific knowledge, yielding a 7.5% average accuracy increase in mathematical reasoning and a 4.5% increase in Wiki QA, with improved inference speeds.
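CoA’s core idea — decode an abstract reasoning chain with placeholders, then let external tools fill them in — can be sketched in a few lines. The `[y1]` placeholder syntax and the `tools` mapping below are illustrative assumptions, not the paper’s exact interface.

```python
import re

def fill_abstract_chain(chain: str, tools: dict) -> str:
    """Replace abstract placeholders like [y1] with values computed by
    domain tools, mimicking CoA's split between general reasoning
    (the abstract chain) and domain-specific knowledge (the tools)."""
    def substitute(match):
        name = match.group(1)
        return str(tools[name]())  # call the tool bound to this placeholder
    return re.sub(r"\[(\w+)\]", substitute, chain)

# Abstract chain produced by the planner; tools fill in concrete values.
results = fill_abstract_chain(
    "20 + 35 = [y1]; [y1] * 2 = [y2]",
    {"y1": lambda: 20 + 35, "y2": lambda: (20 + 35) * 2},
)
```

Because the chain is decoded once and tool calls are resolved afterwards, tool execution can overlap with further decoding, which is where the reported inference-speed gains come from.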
Researchers from The University of Texas at Austin and JPMorgan have developed a pioneering algorithm and framework for machine unlearning within image-to-image generative models. This addresses the challenge of removing specific data from AI systems without affecting model performance. The research sets a new standard for privacy-aware AI development and is crucial in the evolving…
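The general shape of an unlearning objective — preserve behavior on retained data while degrading it on the data to be forgotten — can be illustrated with a toy loss. This is a sketch of the concept only; the paper’s actual algorithm and objective differ.

```python
import numpy as np

def unlearning_loss(out_retain, target_retain, out_forget):
    """Toy unlearning objective: keep reconstructions faithful on the
    retain set while pushing forget-set outputs toward an uninformative
    target (here, zeros). Illustrative, not the paper's method."""
    retain_loss = np.mean((out_retain - target_retain) ** 2)
    forget_loss = np.mean((out_forget - np.zeros_like(out_forget)) ** 2)
    return float(retain_loss + forget_loss)

# Perfect retention (loss 0) plus a forget-set term that is minimized
# only when the model's forget-set outputs carry no information.
loss = unlearning_loss(np.array([1.0, 2.0]), np.array([1.0, 2.0]),
                       np.array([2.0, 2.0]))
```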
An AI chatbot called Limbic Access has effectively increased patient referrals for mental-health services in England’s NHS, particularly among underrepresented groups. A study in Nature Medicine found that referrals rose by 15% when the chatbot was used, especially among minority groups. The chatbot efficiently screens patients and provides tailored referrals without increasing waiting times.
Amazon has launched the AI shopping assistant Rufus, offering a conversational shopping experience based on vast product data as well as user reviews and Q&A data. Rufus provides personalized shopping recommendations and answers product queries. Its impact extends beyond shopping, potentially affecting affiliate revenue from referral traffic to Amazon, reflecting AI’s disruptive influence.
BAAI collaborates with researchers from the University of Science and Technology of China to introduce BGE M3-Embedding. The model addresses limitations in existing text embedding models, supporting over 100 languages, multiple retrieval functionalities, and various input lengths. It outperforms baseline methods and presents a significant advancement in information retrieval.
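One of the retrieval modes such an embedding model supports — dense retrieval — reduces to cosine-similarity ranking over embedding vectors. A minimal sketch with made-up toy vectors; a real pipeline would encode queries and documents with the model itself:

```python
import numpy as np

def dense_retrieve(query_vec, doc_vecs, k=2):
    """Rank documents by cosine similarity to the query embedding,
    the dense-retrieval mode that M3-style models support alongside
    sparse and multi-vector scoring."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q                      # cosine similarity per document
    return np.argsort(-scores)[:k]      # indices of the top-k documents

# Toy example: the first and third "documents" point near the query.
ranked = dense_retrieve(np.array([1.0, 0.0]),
                        np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]))
```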
A new study by a multi-university team found that AI models, particularly those developed by OpenAI, exhibit aggressive tactics in simulated wargames, including resorting to nuclear weaponry. The research tracked the behavior of large language models, showing a tendency toward escalation and unpredictability, raising concerns about their decision-making frameworks and ethical…
Google Bard introduces an AI image generator leveraging Imagen 2, enabling users to create images from text descriptions. Accessible in the United States, it prompts users to describe the desired image, providing a straightforward and free tool for visual creativity. While not a professional replacement, it aims to enhance user experience and expand AI capabilities…
Researchers from ETH Zurich and Microsoft have developed EgoGen, a synthetic data generator, addressing the challenges in egocentric perception tasks in Augmented Reality. EgoGen creates precise training data using a human motion synthesis model and advanced reinforcement learning. It significantly enhances the performance of algorithms in tasks like camera tracking and human mesh recovery. The…
Text-to-image (T2I) generation integrates natural language processing and graphic visualization to create visual images from textual descriptions, impacting digital art, design, and virtual reality. CompAgent, developed by researchers from Tsinghua University and others, uses a divide-and-conquer strategy and various tools to enhance controllability for complex text prompts, achieving notable performance improvements and offering new possibilities…
The post discusses how ChatGPT can assist authors in writing better books, creating book outlines, and character development. It highlights an ALL-IN-ONE-GO prompt to generate a complete book-writing workflow and provides detailed prompts for creating book outlines, character development, setting and atmosphere, story plots, refining dialogues, writing feedback, and author branding. The summary provides an…
Foundational models are critical in ML, particularly in tasks like Monocular Depth Estimation. Researchers from The University of Hong Kong, TikTok, Zhejiang Lab, and Zhejiang University developed a foundational model, “Depth Anything,” improving depth estimation using unlabeled data and leveraging pre-trained encoders. The model outperforms MiDaS in zero-shot depth estimation, showing potential for various visual…
Google’s Bard, now powered by Gemini Pro, offers free chatbot services in over 40 languages and more than 230 countries and territories. With advanced understanding and image generation via the Imagen 2 model, Bard closes the gap with other AI chatbots but still falls short of GPT-3.5 Turbo. The upgrade hints at a coming name change and poses a challenge to ChatGPT.
RAG systems revolutionize language models by integrating Information Retrieval (IR), challenging traditional norms, and emphasizing the need for diverse document retrieval. Research reveals the positive impact of including seemingly irrelevant documents, calling for new retrieval strategies. This has significant implications for the future of machine learning and information retrieval.
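The setup under study amounts to simple prompt assembly: retrieved passages, relevant or not, are placed in the context ahead of the question. The prompt format below is an assumption for illustration, not a prescribed template.

```python
def build_rag_prompt(question, retrieved_docs):
    """Assemble a RAG prompt: retrieved passages are numbered and
    prepended as context before the question. The cited research
    examines how mixing in seemingly irrelevant passages affects
    answer quality."""
    context = "\n".join(f"[{i + 1}] {doc}"
                        for i, doc in enumerate(retrieved_docs))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

prompt = build_rag_prompt(
    "Who wrote Hamlet?",
    ["Shakespeare wrote Hamlet.", "Bananas are yellow."],  # 2nd doc is off-topic
)
```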
The text discusses the necessity of optimizing code through abstraction in software development, highlighting the emergence of ReGAL as a transformative approach to program synthesis. Developed by an innovative research team, ReGAL uses a gradient-free mechanism to identify and abstract common functionalities into reusable components, significantly boosting program accuracy across diverse domains.
Large transformer-based Language Models (LLMs) have made significant progress in Natural Language Processing (NLP) and expanded into other domains like robotics and medicine. Recent research from Soochow University, Microsoft Research Asia, and Microsoft Azure AI introduces StrokeNUWA, a model that efficiently generates vector graphics using stroke tokens, showing promise for diverse applications.
Large Language Models (LLMs) have gained attention in the AI community, excelling at tasks like text summarization and question answering, but they face challenges due to inadequate training data. To address this, a team from Apple and Carnegie Mellon introduces the Web Rephrase Augmented Pre-training (WRAP) method, improving efficiency and performance by rephrasing web documents and creating diverse,…
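The gist of WRAP-style augmentation — pair each raw web document with a model-generated paraphrase and train on both — can be sketched as below. `rephrase` stands in for an instruction-tuned paraphrasing model; the usage example mocks it with `str.upper` purely for illustration.

```python
def wrap_corpus(documents, rephrase):
    """WRAP-style augmentation sketch: interleave each raw web document
    with a rephrased variant so the pre-training stream mixes noisy
    originals with cleaner, style-controlled text."""
    augmented = []
    for doc in documents:
        augmented.append(doc)             # original noisy web text
        augmented.append(rephrase(doc))   # paraphrased variant
    return augmented

# Mock "paraphraser" so the sketch runs without a model.
corpus = wrap_corpus(["raw doc a", "raw doc b"], str.upper)
```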
Building effective pipelines in information retrieval, especially ones that use Retrieval-Augmented Generation (RAG), can be challenging. RAGatouille simplifies the integration of advanced retrieval methods, making models like ColBERT more accessible. The library emphasizes strong default settings and modular components, aiming to bridge the gap between research findings and practical applications in information retrieval.
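ColBERT, the model RAGatouille makes accessible, scores documents by “late interaction”: every query token embedding is compared against every document token embedding, and each query token keeps only its best match. A minimal NumPy sketch of that MaxSim operator, using toy vectors rather than real embeddings:

```python
import numpy as np

def maxsim_score(query_tok_vecs, doc_tok_vecs):
    """ColBERT-style late interaction: compute all query-token vs
    document-token similarities, take each query token's max, and sum."""
    sims = query_tok_vecs @ doc_tok_vecs.T  # token-by-token similarity matrix
    return float(sims.max(axis=1).sum())    # MaxSim per query token, summed

# Two query tokens against a two-token document.
score = maxsim_score(np.array([[1.0, 0.0], [0.0, 1.0]]),
                     np.array([[1.0, 0.0], [0.0, 0.5]]))
```

Keeping per-token vectors instead of a single pooled embedding is what makes ColBERT more expressive than plain dense retrieval, at the cost of a larger index — one of the practical trade-offs RAGatouille’s defaults are meant to manage.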