-
Data poisoning tool helps artists punish AI scrapers
Researchers from the University of Chicago have developed a tool called Nightshade, which can “poison” AI models that use images without consent. It embeds invisible pixel-level changes into an image, corrupting how models classify that image and distorting related concepts. The tool could make AI companies more cautious about using images without permission but also highlights…
-
How Does Retrieval Augmentation Impact Long-Form Question Answering? This AI Study Provides New Insights into How Retrieval Augmentation Impacts Long-Form, Knowledge-Rich Text Generation of Language Models
Researchers from the University of Texas at Austin explored how retrieval augmentation affects answer generation in long-form question answering (LFQA) systems. Their experiments found that retrieval augmentation significantly alters the answers that language models (LMs) generate. The quality of attribution in LMs can vary widely, even when given the same set of…
-
UT Austin Researchers Introduce LIBERO: A Lifelong Robot Learning Benchmark to Study Knowledge Transfer in Decision-Making and Robotics at Scale
LIBERO is a lifelong learning benchmark in robot manipulation that focuses on knowledge transfer in declarative and procedural domains. It introduces five key research areas in lifelong learning for decision-making (LLDM) and offers a procedural task generation pipeline with 130 tasks. Experiments reveal the superiority of sequential fine-tuning over existing LLDM methods. The benchmark includes…
-
GPT-4’s multimodal capability makes it vulnerable to attack
OpenAI’s GPT-4 has impressive image processing abilities, but this new capability also opens the model up to attacks. While ChatGPT has guardrails to prevent malicious text prompts, it becomes more susceptible to complying with malicious commands hidden in images. OpenAI has implemented mitigations for adversarial images containing overlaid text, but these efforts may not fully…
-
This new tool could give artists an edge over AI
Nightshade, a new tool developed by a computer science lab at the University of Chicago, may shift the power dynamics between artists and technology companies. By applying Nightshade to their work, artists can trick machine-learning models into malfunctioning by introducing “poisoned pixels.” This tool could help artists protect their work from being scraped by tech…
-
Roman Numeral Analysis with Graph Neural Networks
This article discusses a new method for automating Roman Numeral Analysis using Graph Neural Networks. The model, called ChordGNN, leverages note-wise information to make onset-wise predictions of Roman Numerals in a musical score. The article highlights the architecture of the ChordGNN model and provides examples of its predictions, comparing them with human annotations. The ability…
-
Video Editing Enters a New Age with VideoCrafter: Open Diffusion AI Models for High-Quality Video Generation
VideoCrafter is an open-source video creation and editing suite that uses diffusion models, a class of machine-learning model, to generate realistic images and videos from text descriptions. It has not yet been released but has the potential to significantly change the production process by allowing even those with no experience in video editing to create professional-quality…
-
Streamlining Repetitive Tasks During Exploratory Data Analysis
This article discusses automation in data science, particularly in the area of exploratory data analysis (EDA). The author emphasizes the importance of automating repetitive EDA tasks and demonstrates the creation of a utility to automate these tasks. The utility includes features such as summary statistics, statistical tests, correlation heatmap, category averages, and data distribution visualizations.…
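A utility like the one described can be sketched in a few lines of pandas. The function name `eda_summary` and its exact outputs are illustrative assumptions, not the article's actual code; the idea is simply to bundle the repetitive steps (summary statistics, missing-value counts, correlations, category averages) into one call.

```python
import pandas as pd

def eda_summary(df: pd.DataFrame) -> dict:
    """Hypothetical helper bundling repetitive EDA steps into one report."""
    numeric = df.select_dtypes(include="number")
    categorical = df.select_dtypes(include=["object", "category"])
    return {
        # Standard descriptive statistics for numeric columns
        "summary_stats": numeric.describe(),
        # Missing-value counts per column
        "missing_counts": df.isna().sum(),
        # Pairwise correlations (the basis for a correlation heatmap)
        "correlations": numeric.corr(),
        # Mean of each numeric column within each category level
        "category_means": {
            col: df.groupby(col)[numeric.columns].mean()
            for col in categorical.columns
        },
    }

# Example usage on a tiny toy frame
df = pd.DataFrame({
    "group": ["a", "a", "b", "b"],
    "x": [1.0, 2.0, 3.0, 4.0],
    "y": [10.0, 20.0, 30.0, 40.0],
})
report = eda_summary(df)
```

Statistical tests and distribution plots would slot into the same dictionary; keeping everything behind one function is what makes the utility reusable across datasets.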
-
Oxford’s New AI Tool EVEscape Predicts Virus Variants Before They Emerge
Oxford University and Harvard Medical School have developed an AI tool called EVEscape, which can predict new virus variants before they emerge. The tool could have accurately forecast COVID-19 mutations had it been available earlier. EVEscape aims to assist in vaccine creation by studying how viruses evolve in response to the human immune system. The…
-
Understanding and Mitigating LLM Hallucinations
Large language models (LLMs) have impressive capabilities in generating responses but are also known for producing non-factual statements, known as hallucinations. Detecting hallucinations is challenging due to the lack of ground-truth context. A possible solution, called SELFCHECKGPT, employs a zero-resource, black-box hallucination detection method that compares multiple responses to the same prompt for consistency. The approach…
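The consistency idea can be illustrated with a minimal sketch: sample several responses to the same prompt, then score each claim by how well the samples support it. This stand-in uses simple token overlap as the scorer; the actual SELFCHECKGPT method uses stronger scorers (e.g., NLI or BERTScore-style similarity), and the function names here are assumptions for illustration.

```python
def consistency_score(sentence: str, samples: list[str]) -> float:
    """Average fraction of the sentence's tokens that also appear in each
    independently sampled response (a crude proxy for semantic support)."""
    tokens = set(sentence.lower().split())
    if not tokens or not samples:
        return 0.0
    overlaps = [
        len(tokens & set(sample.lower().split())) / len(tokens)
        for sample in samples
    ]
    return sum(overlaps) / len(overlaps)

def flag_hallucinations(sentences: list[str], samples: list[str],
                        threshold: float = 0.5) -> list[str]:
    """Flag sentences whose support across the sampled responses is low:
    hallucinated facts tend not to reappear consistently across samples."""
    return [s for s in sentences if consistency_score(s, samples) < threshold]

# Example: two sampled responses agree on the first claim, not the second
samples = ["Paris is the capital of France", "France's capital is Paris"]
answer = ["Paris is the capital of France", "The moon is made of cheese"]
flagged = flag_hallucinations(answer, samples)
```

The key property is zero-resource operation: no external knowledge base is consulted, only the model's own sampled outputs.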