The Branch-Solve-Merge (BSM) program enhances Large Language Models (LLMs) on complex natural language tasks. It comprises branch, solve, and merge modules that plan sub-tasks, solve them independently, and combine the results. Applied to LLMs like Vicuna, LLaMA-2-chat, and GPT-4, BSM boosts human-LLM agreement, reduces biases, increases story coherence, and improves constraint satisfaction. BSM is a promising solution for enhancing…
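The branch/solve/merge control flow described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `call_llm` and the prompt strings are hypothetical stand-ins for real LLM calls.

```python
# Minimal sketch of the Branch-Solve-Merge (BSM) control flow.
# `call_llm` is a hypothetical placeholder for a real LLM query
# (e.g. to Vicuna, LLaMA-2-chat, or GPT-4).

def call_llm(prompt: str) -> str:
    # Placeholder: a real implementation would call an LLM API here.
    return f"answer({prompt})"

def branch(task: str) -> list[str]:
    # Branch module: plan by decomposing the task into parallel sub-tasks.
    return [f"{task} / subtask {i}" for i in range(1, 3)]

def solve(subtask: str) -> str:
    # Solve module: answer each sub-task independently.
    return call_llm(subtask)

def merge(solutions: list[str]) -> str:
    # Merge module: fuse the sub-task answers into one final response.
    return " | ".join(solutions)

def branch_solve_merge(task: str) -> str:
    return merge([solve(s) for s in branch(task)])

print(branch_solve_merge("evaluate response quality"))
```

In the actual method each module is itself an LLM prompt; the point of the sketch is only the decompose-solve-recombine structure.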
The latest AlphaFold model exhibits enhanced accuracy and broader coverage beyond proteins, now including other biological molecules and ligands.
Leica has introduced the M11-P, the first digital camera to incorporate a digital watermark that certifies photos as genuine and not AI-generated or manipulated. This move aims to restore trust in digital content, particularly in the field of photojournalism. The camera can add a digital watermark conforming to the Content Credentials standard advocated by the…
President Joe Biden signed an executive order on AI, requiring companies to disclose whether their systems could enable dangerous weapons, and directing efforts to combat fake videos and news. America aims to lead in AI regulation while advancing the technology and preventing China from gaining an advantage. The order has received support from big tech companies. However, implementing…
A new AI technique called AnimeInbet has been developed to automate the process of in-betweening line drawings in cartoon animation. Unlike previous methods, AnimeInbet works with geometrized vector graphs instead of raster images, resulting in cleaner and more accurate intermediate frames. The technique involves matching and relocating vertices, preserving intricate line structures, and predicting a…
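The core advantage of working with vector graphs rather than raster images is that an intermediate frame can be produced by relocating matched vertices. The sketch below shows only that final interpolation step, assuming the hard part (vertex matching, which AnimeInbet learns) has already produced index-aligned vertex lists; the data is illustrative.

```python
# Given two index-matched vertex sets from consecutive line drawings,
# linearly interpolate positions to produce an in-between frame.
# Vertex matching itself (the learned part of AnimeInbet) is assumed done.

def inbetween(frame0, frame1, t=0.5):
    # frame0/frame1: lists of (x, y) vertex positions, index-matched.
    # t: interpolation factor, 0.0 -> frame0, 1.0 -> frame1.
    return [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
            for (x0, y0), (x1, y1) in zip(frame0, frame1)]

# Midpoint frame between two tiny two-vertex drawings.
mid = inbetween([(0.0, 0.0), (2.0, 2.0)], [(4.0, 0.0), (2.0, 6.0)])
```

Because edges connect the same vertex indices in both frames, the interpolated graph keeps the original line connectivity, which is why vector-space in-betweening avoids the blurring raster methods suffer from.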
This week, there has been significant focus on AI. The White House introduced an executive order aimed at promoting safe and trustworthy AI systems, while the G7 agreed on a voluntary code of conduct for AI companies. Additionally, the UK is hosting the AI Safety Summit to establish global rules on AI safety. However, some…
This article explores the environmental impact of generative AI and discusses its potential benefits. It highlights that generative AI can lead to productivity gains and potentially reduce inequality within certain occupations. However, it raises concerns about the environmental cost of generative AI and its impact on overall resource consumption. The article concludes by discussing the…
Researchers from Stanford University, UMass Amherst, and UT Austin have developed a novel family of RLHF algorithms called Contrastive Preference Learning (CPL). CPL uses a regret-based model of preferences, which provides more accurate information on the best course of action. CPL has three advantages over previous methods: it scales well, is completely off-policy, and enables…
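A CPL-style objective can be sketched as a contrastive (Bradley-Terry) comparison between two behavior segments, where the discounted sum of the policy's action log-probabilities serves as the regret/advantage proxy. This is a hedged simplification of the paper's loss, not its exact formulation; `alpha` and the toy log-probabilities are illustrative.

```python
import math

# Sketch of a CPL-style contrastive preference objective.
# A segment's score is the discounted sum of the policy's action
# log-probabilities, scaled by a temperature alpha (hyperparameters
# here are illustrative, not the paper's values).

def segment_score(log_probs, alpha=0.1, gamma=1.0):
    return alpha * sum((gamma ** t) * lp for t, lp in enumerate(log_probs))

def cpl_loss(preferred_log_probs, rejected_log_probs, alpha=0.1):
    s_pos = segment_score(preferred_log_probs, alpha)
    s_neg = segment_score(rejected_log_probs, alpha)
    # -log sigmoid(s_pos - s_neg): drives the policy to assign higher
    # likelihood to the preferred segment's actions.
    return -math.log(math.exp(s_pos) / (math.exp(s_pos) + math.exp(s_neg)))

loss = cpl_loss([-0.5, -0.3], [-1.2, -0.9])
```

Note the off-policy property visible in the sketch: the loss depends only on the current policy's log-probabilities of stored segment actions, so no reward model or on-policy rollouts are needed.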
Researchers challenge the belief that Vision Transformers (ViTs) outperform Convolutional Neural Networks (ConvNets) with large datasets. They pre-train NFNet, a ConvNet architecture, on the JFT-4B dataset. NFNet performs comparably to ViTs, showing that compute budget, more than architecture choice, drives model performance. The study encourages fair evaluation of different architectures considering both performance and computational requirements.
Language models like GPT-3 generate text based on learned patterns and have no inherent sentiments or emotions of their own. However, biased training data can result in biased outputs. Sentiment analysis can be challenging with ambiguous or sarcastic text. Misuse can have real-world consequences, so responsible AI usage is important. Researchers at UC Santa…
LLMTime is a method proposed by researchers from CMU and NYU for zero-shot time series forecasting using large language models (LLMs). By encoding time series as text and leveraging pretrained LLMs, LLMTime achieves high performance without the need for specialized knowledge or extensive training. The technique outperforms purpose-built time series models across a range of problems and…
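The "encode time series as text" step can be sketched as follows: fix the decimal precision, drop the decimal point, and space-separate the digits so each digit maps to its own token, with timesteps separated by a delimiter. This mirrors the serialization idea behind LLMTime, but the exact formatting details here are a simplified assumption.

```python
# Sketch of an LLMTime-style serialization of a numeric series into a
# prompt string. Precision and delimiters are illustrative choices.

def encode_series(values, precision=1):
    tokens = []
    for v in values:
        # Fix precision, drop the decimal point, then space-separate
        # digits so each digit becomes its own token for the LLM.
        digits = f"{v:.{precision}f}".replace(".", "")
        tokens.append(" ".join(digits))
    # Separate timesteps with " , ".
    return " , ".join(tokens)

encoded = encode_series([12.3, 7.0])
# 12.3 -> "123" -> "1 2 3"; 7.0 -> "70" -> "7 0"
```

Forecasting then amounts to asking the LLM to continue the string and decoding the sampled digits back into numbers, inverting the steps above.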
ULTRA is a model for learning universal and transferable graph representations for knowledge graphs. It can generalize to any KG with different entity and relation vocabularies, and it outperforms specialized baselines in link prediction experiments. ULTRA’s performance is enhanced through pre-training and fine-tuning, making it suitable for inductive and transferable KG reasoning. Future work includes…
President Joe Biden has issued a comprehensive executive order on AI governance aimed at ensuring transparency and standardization in the industry. The order emphasizes the need for clear content labeling and watermarking practices and includes requirements for AI developers to share safety test results with the US government. Critics have noted the lack of enforcement…
Researchers from Apple and Carnegie Mellon University have developed a benchmark called TIC-DataComp for continually training foundation models such as OpenAI's CLIP. They found that resuming training from the most recent checkpoint and replaying historical data delivers performance on par with an Oracle while being 2.7 times more computationally efficient. The findings highlight the need…
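The replay strategy amounts to building each training batch from a mix of new data and a sample of earlier data, instead of retraining from scratch. The sketch below shows that batch construction only; the fraction, batch size, and naming are illustrative assumptions, not the paper's settings.

```python
import random

# Sketch of a continual-training replay batch: resume from the latest
# checkpoint and mix fresh data with a replayed sample of history.
# replay_fraction and batch_size are illustrative hyperparameters.

def make_replay_batch(new_data, history, replay_fraction=0.5,
                      batch_size=8, seed=0):
    rng = random.Random(seed)
    n_replay = int(batch_size * replay_fraction)
    # Fill part of the batch with the newest time slice of data...
    batch = rng.sample(new_data, batch_size - n_replay)
    # ...and the rest with a random replay of historical examples.
    if history:
        batch += rng.sample(history, min(n_replay, len(history)))
    return batch

batch = make_replay_batch(list(range(100, 120)), list(range(0, 50)))
```

The compute saving comes from never revisiting the full history: each step touches only a small replayed sample, while starting from the latest checkpoint preserves what was already learned.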
Despite some progress in the SAG-AFTRA strike negotiations, unresolved issues remain, including the use of AI in recreating performers' likenesses and revenue sharing with streaming platforms. The strike has continued for 109 days, with uncertainty surrounding its end date. Negotiations between SAG-AFTRA and industry producers are ongoing. The Writers Guild of America has already secured…
ChatGPT has shown impressive performance across disciplines, but it struggles with math. While it has performed well on medical- and law-school exams, it falls short in accounting. A study conducted by Professor David Wood revealed that ChatGPT scored 47.4% on accounting exams, significantly lower than the human average of 76.7%. LLMs like…
Amazon Bedrock is a managed service by AWS that provides access to foundation models (FMs) and tools for customization. It allows developers to build generative AI applications using FMs through an API, without infrastructure management. To ensure data privacy, customers can establish a private connection between their VPC and Amazon Bedrock using VPC endpoints powered…
The Amazon SageMaker JumpStart SDK has been simplified for building, training, and deploying foundation models, and the prediction code is now easier to use. This post demonstrates how to get started with foundation models using the simplified SageMaker JumpStart SDK in just a few lines of code. You can find more information about the…
Knowledge graphs, like the Financial Dynamic Knowledge Graph (FinDKG) and the Knowledge Graph Transformer (KGTransformer), are valuable tools for enhancing AI systems. These graphs capture interconnected facts and temporal dynamics, allowing for better understanding and analysis. The FinDKG, created from financial news, can be used for risk monitoring and investing. The KGTransformer model outperforms other…
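The "interconnected facts with temporal dynamics" a dynamic KG captures can be pictured as timestamped (head, relation, tail) triples. The toy example below, with made-up entities and relations, shows how a FinDKG-style graph supports a simple risk-monitoring query: retrieving an entity's fact timeline.

```python
# Toy dynamic knowledge graph: timestamped (head, relation, tail, time)
# facts of the kind FinDKG extracts from financial news.
# All entity and relation names here are invented for illustration.

facts = [
    ("AcmeBank", "acquires", "FinCo", "2023-01"),
    ("AcmeBank", "downgraded_by", "RatingsCorp", "2023-06"),
    ("FinCo", "partners_with", "PayCo", "2023-03"),
]

def timeline(entity, facts):
    # All facts touching an entity, ordered by timestamp --
    # a minimal building block for risk monitoring.
    return sorted((f for f in facts if entity in (f[0], f[2])),
                  key=lambda f: f[3])

acme_events = timeline("AcmeBank", facts)
```

Models like KGTransformer learn over exactly this kind of structure, embedding entities and relations so that likely future facts (links) can be predicted rather than merely looked up.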
Pixis, a fast-growing AI company, is striving to democratize AI for the growth marketing sector. They are focused on creating products that require zero technical expertise, allowing marketers to directly leverage the potential of AI. Pixis has simplified the implementation process, reduced integration times, and prioritized transparency and data privacy compliance. They believe that AI…