AI — weekly megathread!

News provided by aibrews.com

  1. Meta AI introduces:
    1. Emu Video: a new text-to-video model that leverages Meta’s Emu image generation model and can respond to text-only, image-only, or combined text-and-image inputs to generate high-quality video [Details].
    2. Emu Edit: a new model capable of free-form image editing through text instructions. Emu Edit follows instructions precisely, ensuring that pixels in the input image unrelated to the instructions remain untouched [Details].
  2. Researchers present LLaVA-Plus, a general-purpose multimodal assistant that expands the capabilities of large multimodal models. LLaVA-Plus maintains a skill repository that contains a wide range of vision and vision-language pre-trained models (tools), and is able to activate relevant tools, given users’ multimodal inputs, for performing real-world tasks [Details].
  3. Google DeepMind, in collaboration with YouTube, announces [Details]:
    1. Lyria, a model that excels at generating high-quality music with instrumentals and vocals, performing transformation and continuation tasks, and giving users more nuanced control of the output’s style and performance.
    2. Dream Track: an experiment in YouTube Shorts. Users can simply enter a topic and choose an artist from the carousel to generate a 30-second soundtrack for their Short. Using the Lyria model, Dream Track simultaneously generates the lyrics, backing track, and AI-generated voice in the style of the selected participating artist.
    3. Music AI tools: Users can create new music or instrumental sections from scratch, transform audio from one music style or instrument to another, and create instrumental and vocal accompaniments. Louis Bell, producer/songwriter, builds a track with just a hum [video].
  4. SiloGen announced Poro, an open-source 34-billion-parameter LLM for English, Finnish, and code, with future releases planned to support other European languages. Poro is freely available for both commercial and research use [Details].
  5. Meta AI released new stereo models for MusicGen. By extending the delay codebook pattern to cover tokens from both the left and right channels, these models generate stereo output with no extra computational cost compared to the previous models [Hugging Face | Paper]. A minimal loading sketch appears after this list.
  6. Alibaba Cloud introduced Qwen-Audio, an open-source multi-task audio-language model that supports various tasks, languages, and audio types, serving as a universal audio understanding model [Details | Demo].
  7. Researchers present JARVIS-1, an open-world agent that can perceive multimodal input (visual observations and human instructions), generate sophisticated plans, and perform embodied control in Minecraft [Details].
  8. Microsoft announced:
    1. Microsoft Copilot Studio: a low-code tool to quickly build, test, and publish standalone copilots and custom GPTs [Details].
    2. Windows AI Studio to enable developers to fine-tune, customize, and deploy state-of-the-art small language models for local use in their Windows apps. In the coming weeks, developers will be able to access Windows AI Studio as a VS Code extension [Details].
    3. Microsoft Azure Maia: a custom-designed chip optimized for large language model training and inference [Details].
    4. A text-to-speech avatar feature in Azure AI Speech to create synthetic videos of a 2D photorealistic avatar speaking [Details].
    5. The addition of 40 new models to the Azure AI model catalog, including Mistral, Phi, Jais, Code Llama, and NVIDIA Nemotron [Details].
  9. Redwood Research, a research lab for AI alignment, has shown that large language models (LLMs) can master “encoded reasoning,” a form of steganography. This allows LLMs to subtly embed intermediate reasoning steps within their generated text in a way that is undecipherable to human readers [Details].
  10. Microsoft Research introduced phi-2: at 2.7B parameters, phi-2 is much more robust than phi-1.5, with improved reasoning capabilities [Details].
  11. Forward Health announced CarePods, a self-contained, AI-powered doctor’s office. CarePod users can get their blood drawn, throat swabbed and blood pressure read, all without a doctor or nurse. Custom AI powers the diagnosis, and behind the scenes, doctors write the appropriate prescription [Details].
  12. You.com launched the YOU API to connect LLMs to the web. The API is launching with three dedicated endpoints: Web Search, News, and RAG [Details]. A hedged request sketch appears after this list.
  13. Notion announced Q&A, an AI assistant that provides answers using information from a Notion workspace [Details].
  14. OpenAI has paused new ChatGPT Plus sign-ups due to the surge in usage after DevDay [Link].
  15. Together.ai announced the Together Inference Engine, which is up to 2x faster than other serverless APIs (e.g., Perplexity, Anyscale, Fireworks AI, or MosaicML) [Details].
  16. Researchers in China have developed an AI-powered robot chemist that might be able to extract oxygen from water on Mars. The robot uses materials found on the red planet to produce catalysts that break down water, releasing oxygen [Details].
  17. Nvidia announced the H200 GPU, which features 141GB of memory at 4.8 terabytes per second, nearly double the capacity and 2.4x more bandwidth compared with its predecessor, the NVIDIA A100 [Details].
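
For item 5, a minimal sketch of loading one of the new stereo MusicGen checkpoints with the audiocraft library. The checkpoint name "facebook/musicgen-stereo-small", the prompt, and the 8-second duration are assumptions for illustration; see the Hugging Face link above for the released models.

```python
# Hedged sketch: load a stereo MusicGen checkpoint with audiocraft and write one clip.
# The checkpoint name "facebook/musicgen-stereo-small" is an assumption for illustration.
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

model = MusicGen.get_pretrained("facebook/musicgen-stereo-small")
model.set_generation_params(duration=8)  # seconds of audio per prompt

# The stereo models return two channels per sample: shape [batch, channels, samples]
wav = model.generate(["lo-fi hip hop beat with warm bass"])

for idx, one_wav in enumerate(wav):
    # audio_write normalizes loudness and saves a .wav next to the script
    audio_write(f"stereo_sample_{idx}", one_wav.cpu(), model.sample_rate, strategy="loudness")
```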
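For item 12, a hedged sketch of calling the Web Search endpoint and turning the results into context for an LLM prompt. The base URL, header name, environment variable, and response fields below are assumptions based on the public docs at launch; check You.com's current API reference before relying on them.

```python
# Hedged sketch: query the YOU API Web Search endpoint and collect snippets for RAG.
# The URL "https://api.ydc-index.io/search", the "X-API-Key" header, and the
# "hits"/"description" fields are assumptions; verify against You.com's docs.
import os
import requests

API_KEY = os.environ["YOU_API_KEY"]  # hypothetical environment variable name

def you_web_search(query: str) -> list[dict]:
    resp = requests.get(
        "https://api.ydc-index.io/search",
        params={"query": query},
        headers={"X-API-Key": API_KEY},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("hits", [])

# Build a small context block to paste into an LLM prompt
hits = you_web_search("latest open-source LLM releases")
context = "\n".join(hit.get("description", "") for hit in hits[:5])
print(context)
```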

🔦 Weekly Spotlight

  1. Retool’s 2023 State of AI in Production report, which surveyed 1,500+ people in tech [Link].
  2. Exploring GPTs: ChatGPT in a trench coat? by Simon Willison [Link].
  3. draw-a-ui: an open-source app that uses tldraw and the gpt-4-vision API to generate HTML based on a wireframe you draw [Link]. A minimal call sketch follows below.
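
The core of a tool like draw-a-ui is a single vision call: send a screenshot of the wireframe to the gpt-4-vision API and ask for one self-contained HTML file back. Below is a minimal sketch with the OpenAI Python client; the prompt wording, file name, and max_tokens value are illustrative and not taken from the repo.

```python
# Hedged sketch: wireframe screenshot -> HTML via the gpt-4-vision-preview model.
# Prompt text and parameters are illustrative; draw-a-ui's actual prompt differs.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("wireframe.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="gpt-4-vision-preview",
    max_tokens=3000,
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Turn this wireframe into a single self-contained HTML file "
                     "styled with Tailwind via CDN. Reply with only the HTML."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```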

- - -

Welcome to the r/artificial weekly megathread. This is where you can discuss Artificial Intelligence: talk about new models, recent news, ask questions, make predictions, and chat about other related topics.

Click here for discussion starters for this thread or for a separate post.

Self-promo is allowed in these weekly discussions. If you want to make a separate post, please read and follow the rules, or you will be banned.

Previous Megathreads & Subreddit revamp and going forward

submitted by /u/jaketocake