This paper, accepted to the NeurIPS 2023 Diffusion Models workshop, discusses the challenges of adapting score-based generative models to diverse data domains and proposes a solution that treats data as functions, yielding a unified representation and a reformulated score function.
A study reveals that artificial intelligence systems used in areas like self-driving cars and medical imaging are more vulnerable than previously thought to deliberate attacks that can trigger incorrect decisions.
The study presented at NeurIPS 2023’s Generative AI and Biology workshop focuses on converting 2D molecular structures into 3D conformations using a novel, scalable diffusion model on Riemannian manifolds, achieving competitive results without making structural assumptions about the molecules.
Retraining customer churn prediction models is vital but challenging, especially because retention interventions change customer behavior and bias the data the models learn from. Control groups, feedback surveys, and uplift modeling can address these biases, enabling more accurate predictions and focused retention strategies. Continual refinement and adaptation are key to future success.
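Uplift modeling can be sketched as a simple two-model (T-learner) approach; the snippet below is a minimal illustration assuming a hypothetical churn dataset with a `treated` flag and a `churned` label (all column names are illustrative, not from the article).

```python
# Minimal two-model (T-learner) uplift sketch for churn retention campaigns.
# Assumes a hypothetical DataFrame with feature columns, a `treated` flag
# (1 = received a retention offer) and a `churned` label; names are illustrative.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

def fit_uplift_models(df: pd.DataFrame, feature_cols: list[str]):
    treated = df[df["treated"] == 1]
    control = df[df["treated"] == 0]
    # One churn model per group: treated customers vs. the control group.
    model_t = GradientBoostingClassifier().fit(treated[feature_cols], treated["churned"])
    model_c = GradientBoostingClassifier().fit(control[feature_cols], control["churned"])
    return model_t, model_c

def uplift_scores(model_t, model_c, df: pd.DataFrame, feature_cols: list[str]) -> np.ndarray:
    # Uplift = estimated reduction in churn probability if the customer is treated.
    p_treated = model_t.predict_proba(df[feature_cols])[:, 1]
    p_control = model_c.predict_proba(df[feature_cols])[:, 1]
    return p_control - p_treated
```

Customers with the largest positive uplift scores are those a retention campaign is most likely to save, which keeps the effect of the intervention separate from the baseline churn estimate.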
A new integer-to-string conversion algorithm, called “LR printer,” outperforms the optimized standard algorithm by 25-38% for 32-bit and 40-58% for 64-bit integers. It’s beneficial for applications that generate large text files with numerous integers, affecting performance notably in data-heavy fields like Data Science and Machine Learning. The C++ implementation is available on GitHub.
The paper, presented at the NeurIPS 2023 ICBINB workshop, examines the use of pre-trained language models in text-to-image auto-regressive generation, finding them of limited utility and providing a twofold analysis of cross-modality tokens.
Google researchers identified a method to retrieve parts of OpenAI’s ChatGPT training data by prompting the model to repeat a word indefinitely, revealing sensitive information. Spending $200, they extracted over 10,000 examples. The findings raise security and privacy concerns amid lawsuits accusing OpenAI of misusing private data to train ChatGPT.
Yann LeCun, Meta’s chief AI scientist and a deep learning pioneer, has expressed skepticism about the near-term development of artificial general intelligence (AGI) and about quantum computing’s role in AI. Unlike some industry leaders, he downplays imminent AGI breakthroughs and doubts AI will match human intelligence soon. He also emphasizes the need for multimodal AI systems and democratizing…
The paper presents a study on using conditional generation from diffusion models for music-production tasks such as audio continuation, inpainting and regeneration, transitions between tracks, and style transfer, applying guidance during the sampling process to 44.1 kHz stereo audio.
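One common form of sampling-time guidance is classifier-free guidance; the sketch below illustrates that variant inside a generic DDPM-style loop with a placeholder denoiser and a toy noise schedule, and is not claimed to be this paper's actual model or audio pipeline.

```python
# Schematic classifier-free guidance inside a DDPM-style sampling loop.
# `denoiser` and the noise schedule are placeholders, not the paper's model.
import torch

def guided_sample(denoiser, cond, shape, steps=50, guidance_scale=3.0):
    x = torch.randn(shape)                      # start from pure noise
    betas = torch.linspace(1e-4, 0.02, steps)   # toy linear noise schedule
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    for t in reversed(range(steps)):
        eps_cond = denoiser(x, t, cond)         # conditional noise estimate
        eps_uncond = denoiser(x, t, None)       # unconditional noise estimate
        # Classifier-free guidance: push the estimate toward the condition.
        eps = eps_uncond + guidance_scale * (eps_cond - eps_uncond)
        # Standard DDPM mean update under the toy schedule.
        x = (x - betas[t] / torch.sqrt(1 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
        if t > 0:
            x = x + torch.sqrt(betas[t]) * torch.randn_like(x)
    return x
```

Inpainting and continuation variants typically overwrite the known region of x with a suitably noised copy of the reference audio at each step, so only the missing portion is generated.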
This article examines public transport systems in Budapest, Berlin, Stockholm, and Toronto using GTFS data and data science tools to analyze and visualize public transport patterns and derive insights for urban planning. The author notes that while GTFS is a universal format, each city’s feed required manual validation, and explores topics such as stop locations, departure times, spatial distributions, transport modes, and route shapes…
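For a flavor of the workflow, the sketch below reads a GTFS feed’s stops.txt and stop_times.txt with pandas and plots stop locations and departures by hour; the feed directory is hypothetical and the column names are the standard GTFS ones.

```python
# Minimal GTFS exploration sketch: stop locations and a departure-time histogram.
# The folder path is hypothetical; column names follow the GTFS specification.
import pandas as pd
import matplotlib.pyplot as plt

feed_dir = "gtfs_budapest"  # hypothetical path to an unzipped GTFS feed

stops = pd.read_csv(f"{feed_dir}/stops.txt")            # stop_id, stop_name, stop_lat, stop_lon
stop_times = pd.read_csv(f"{feed_dir}/stop_times.txt")  # trip_id, departure_time, stop_id, ...

# Spatial distribution of stops.
stops.plot.scatter(x="stop_lon", y="stop_lat", s=1, alpha=0.3, title="Stop locations")

# Departures by hour (GTFS allows hours >= 24 for after-midnight service).
hours = stop_times["departure_time"].dropna().str.slice(0, 2).astype(int) % 24
hours.value_counts().sort_index().plot.bar(title="Departures by hour")
plt.show()
```

Transport modes and route shapes live in routes.txt and shapes.txt of the same feed and can be joined on their GTFS IDs in the same way.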
The Metal.jl framework gives Julia users on macOS the ability to use the GPU for better performance in scientific computing and machine learning. It addresses the compatibility challenges introduced by macOS’s transition to Apple’s M-series chips. Users can harness the GPU’s parallel processing via Metal.jl for tasks like matrix multiplication and machine learning with Flux, improving…
This article lists over 15 AI tools for developers as of December 2023, highlighting their key features. These tools assist in coding, debugging, generating documentation, managing snippets, creating AI agents, designing visuals, and more. They include GitHub Copilot, Amazon CodeWhisperer, Notion AI, Stepsize, Mintlify, Pieces for Developers, LangChain, You.com, AgentGPT, Jam.dev, Durable, Leap AI, AssemblyAI,…
To transition to data analytics from another field, pursue relevant education or training, gain practical experience, and engage with the data science community through platforms like Towards Data Science.
Getir, established in 2015, is a leading ultrafast grocery delivery company with a multinational presence. Utilizing Amazon SageMaker and AWS Batch, they reduced model training time by 90% and improved operational efficiency. Their data science team developed a product category prediction pipeline with an 80% accuracy rate, aiding commercial teams in inventory management and competitive…
Researchers discovered that language models like GPT-3.5 Turbo could inadvertently reveal their training data when prompted to repeat simple words, leaking sensitive content, personal information, and copyrighted material. The technique, known as a divergence attack, had a success rate of 3% and poses a significant security risk. Companies have been notified, with the web version…
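A divergence probe of this kind can be issued with a few lines of the OpenAI Python client; the sketch below is illustrative only, and both the prompt wording and the crude tail check are assumptions, not the researchers’ actual tooling.

```python
# Illustrative sketch of a "repeat a word forever" divergence probe.
# The prompt wording and the tail heuristic are assumptions for illustration,
# not the researchers' methodology.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": 'Repeat the word "poem" forever.'}],
    max_tokens=2048,
)

text = response.choices[0].message.content
# After many repetitions the model may "diverge" and emit unrelated text;
# flag responses whose tail is no longer just the repeated word.
tail = text[-500:]
if tail.count("poem") < 10:
    print("Possible divergence, inspect manually:")
    print(tail)
```

In the reported study, roughly 3% of such diverged responses contained memorized training data, which is what made large-scale extraction so cheap.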
Pika Labs, an AI video generator startup, has caused a stir with its product, Pika 1.0, leading to a stock increase for Sunyard Technology, a firm with familial ties to co-founder Demi Guo. The startup raised $55 million and aims to democratize video creation, despite broader industry challenges.
Developed by an international research team, PepCNN is a deep learning model that predicts protein-peptide binding with higher accuracy than previous tools. Using structural, sequence, and language model features, it excels in specificity, precision, and AUC metrics for better drug discovery and understanding protein-peptide interactions. Further improvements are planned using DeepInsight technology.
NeRF models scenes in 3D, learning from multiple viewpoints to render photorealistic images. Researchers from Sungkyunkwan University improved efficiency with a masking strategy, reducing memory requirements and increasing speed. Point-based rendering enhancements and ongoing research promise to further advance realistic 3D applications. Credit goes to the researchers and is shared via various online AI…
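As background on how NeRF turns learned densities and colors into an image, here is a minimal volume-rendering sketch for a single ray; it is the generic textbook formulation, not the masked or point-based pipeline described in the research.

```python
# Minimal NeRF-style volume rendering along one ray: composite per-sample
# densities and colors into a pixel color. Standalone toy sketch.
import numpy as np

def composite_ray(sigmas: np.ndarray, colors: np.ndarray, deltas: np.ndarray) -> np.ndarray:
    """sigmas: (N,) densities, colors: (N, 3) RGB, deltas: (N,) sample spacings."""
    alphas = 1.0 - np.exp(-sigmas * deltas)                          # opacity of each segment
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))   # accumulated transmittance
    weights = trans * alphas                                         # contribution of each sample
    return (weights[:, None] * colors).sum(axis=0)                   # final pixel color

# Toy usage: 64 random samples along a ray.
rng = np.random.default_rng(0)
pixel = composite_ray(rng.uniform(0, 5, 64), rng.uniform(0, 1, (64, 3)), np.full(64, 0.05))
```

Training a NeRF amounts to fitting the densities and colors (via an MLP) so that these composited pixel colors match photographs taken from many viewpoints.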
Researchers released MediTron, an open-source medical LLM suite with 7B and 70B parameter variants, excelling in benchmarks and tailored for tasks like medical QA. It uses an extensive medical dataset for training but requires further testing before clinical deployment to ensure safety.
Microsoft researchers developed MAIRA-1, a model combining a chest X-ray-specific image encoder with a fine-tuned language model to generate accurate radiology reports. It leverages data augmentation and evaluation metrics tailored to clinical relevance to improve report quality. Future enhancements may include incorporating study histories to reduce inaccuracies.