-
A glimpse of the next generation of AlphaFold
The latest AlphaFold model exhibits enhanced accuracy and broader coverage beyond proteins, now including other biological molecules and ligands.
-
Leica unveils anti-AI camera to fight deepfakes
Leica has introduced the M11-P, the first digital camera to embed a watermark certifying that photos are genuine rather than AI-generated or manipulated. The move aims to restore trust in digital content, particularly in photojournalism. The camera can add a digital watermark conforming to the Content Credentials standard advocated by the…
-
Biden Takes First Step to Regulate Artificial Intelligence with Executive Order
President Joe Biden signed an executive order on AI, requiring companies to disclose whether their systems could enable dangerous weapons and directing efforts to combat fake videos and news. America aims to lead in AI regulation while advancing the technology and preventing China from gaining an advantage. The order has received support from big tech companies. However, implementing…
-
Shedding Light on Cartoon Animation’s Future: AnimeInbet’s Innovation in Line Drawing Inbetweening
A new AI technique called AnimeInbet has been developed to automate the process of in-betweening line drawings in cartoon animation. Unlike previous methods, AnimeInbet works with geometrized vector graphs instead of raster images, resulting in cleaner and more accurate intermediate frames. The technique involves matching and relocating vertices, preserving intricate line structures, and predicting a…
-
People shouldn’t pay such a high price for calling out AI harms
This week, there has been significant focus on AI. The White House introduced an executive order aimed at promoting safe and trustworthy AI systems, while the G7 agreed on a voluntary code of conduct for AI companies. Additionally, the UK is hosting the AI Safety Summit to establish global rules on AI safety. However, some…
-
Is Generative AI Worth Its Environmental Footprint?
This article explores the environmental impact of generative AI and discusses its potential benefits. It highlights that generative AI can lead to productivity gains and potentially reduce inequality within certain occupations. However, it raises concerns about the environmental cost of generative AI and its impact on overall resource consumption. The article concludes by discussing the…
-
Stanford and UT Austin Researchers Propose Contrastive Preference Learning (CPL): A Simple Reinforcement Learning (RL)-Free Method for RLHF that Works with Arbitrary MDPs and Off-Policy Data
Researchers from Stanford University, UMass Amherst, and UT Austin have developed a novel family of RLHF algorithms called Contrastive Preference Learning (CPL). CPL uses a regret-based model of preferences, which provides more accurate information on the best course of action. CPL has three advantages over previous methods: it scales well, is completely off-policy, and enables…
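The core idea can be sketched as a Bradley-Terry-style comparison between two behavior segments, where scaled sums of the policy's log-probabilities stand in for advantages under the regret-based preference model. This is a minimal illustration, not the paper's implementation; the function name `cpl_loss` and the scalar inputs are assumptions for the sketch.

```python
import math

def cpl_loss(logp_preferred, logp_rejected, alpha=0.1):
    """Contrastive preference loss over two behavior segments.

    logp_preferred / logp_rejected: per-step log-probabilities the policy
    assigns to the actions in the preferred and rejected segments.
    alpha: temperature scaling the implied advantages.
    """
    # Summed, scaled log-probabilities act as segment "advantages".
    a_pref = alpha * sum(logp_preferred)
    a_rej = alpha * sum(logp_rejected)
    # Negative log-likelihood that the preferred segment wins the comparison.
    return -math.log(math.exp(a_pref) / (math.exp(a_pref) + math.exp(a_rej)))
```

Because the loss depends only on log-probabilities of logged actions, it can be minimized with supervised-style gradient descent on off-policy preference data, with no reward model or RL rollout in the loop.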
-
Is ConvNet Making a Comeback? Unraveling Their Performance on Web-Scale Datasets and Matching Vision Transformers
Researchers challenge the belief that Vision Transformers (ViTs) outperform Convolutional Neural Networks (ConvNets) on large datasets. They pre-train NFNet, a ConvNet architecture, on the JFT-4B dataset and find it performs comparably to ViTs, suggesting that compute budget, more than architecture, drives model performance. The study encourages fair evaluation of different architectures that accounts for both performance and computational requirements.
-
How Effective are Self-Explanations from Large Language Models like ChatGPT in Sentiment Analysis? A Deep Dive into Performance, Cost, and Interpretability
Language models like GPT-3 generate text from learned patterns; they have no inherent sentiments or emotions, but biased training data can produce biased outputs. Sentiment analysis is challenging with ambiguous or sarcastic text, and misuse can have real-world consequences, so responsible AI usage is important. Researchers at UC Santa…
-
Researchers from CMU and NYU Propose LLMTime: An Artificial Intelligence Method for Zero-Shot Time Series Forecasting with Large Language Models (LLMs)
LLMTime is a method proposed by researchers from CMU and NYU for zero-shot time series forecasting using large language models (LLMs). By encoding time series as text and leveraging pretrained LLMs, LLMTime achieves high performance without the need for specialized knowledge or extensive training. The technique outperforms purpose-built time series models across various tasks and…
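The "time series as text" step can be sketched as follows. This is a simplified assumption of the idea, not LLMTime's exact tokenizer-aware scheme: values are rescaled, formatted at fixed precision, and joined with commas so an LLM can continue the sequence; `encode_series` and `decode_series` are hypothetical helper names.

```python
def encode_series(values, precision=2):
    """Encode a numeric series as a comma-separated text string for an LLM prompt.

    Values are rescaled by their max magnitude so every number fits a
    compact fixed-precision format; the scale is returned for decoding.
    """
    scale = max(abs(v) for v in values) or 1.0
    return ", ".join(f"{v / scale:.{precision}f}" for v in values), scale

def decode_series(text, scale):
    """Parse the LLM's text continuation back into numeric values."""
    return [float(tok) * scale for tok in text.split(",")]
```

A forecast then amounts to prompting the LLM with the encoded history, sampling a textual continuation, and decoding it back to numbers with the stored scale.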