
$6 Billion for xAI's GPUs, Google Docs' AI Update & ESPN's AI Avatar

Hi all,

Another day full of AI news. Let's dive in!

Elon Musk's artificial intelligence company, xAI, is in the midst of securing up to $6 billion in funding at a valuation of $50 billion. The round, expected to close soon, includes significant investment from Middle Eastern sovereign funds and other backers. The money will go toward acquiring 100,000 Nvidia chips for a data center in Memphis, infrastructure also expected to support Tesla's Full Self-Driving technology. xAI launched the Grok chatbot to rival AI giants like OpenAI and Google, and Musk is set to collaborate with President-elect Donald Trump's administration on AI policy. Read more

ESPN is experimenting with a new AI avatar named 'FACTS' in collaboration with SEC Nation. Designed to provide additional insights to anchors during pregame discussions, FACTS is still under development with no set launch date. This initiative is part of ESPN's broader strategy to incorporate artificial intelligence into its programming, including an ambitious plan to launch an 'AI-driven SportsCenter' on its upcoming streaming platform. The move sparks curiosity about how sports fans, who traditionally favor human analysis, will respond to this technological integration. Read more

Bluesky, a rising social network, has declared it will not use user posts for AI training, unlike X, formerly known as Twitter, which recently updated its terms to permit such practices. Despite leveraging AI for moderation and content discovery, Bluesky assures users it refrains from employing generative AI trained on user content. This announcement comes amidst a significant increase in Bluesky's user base, as political shifts drive users away from X. Read more

Google has enhanced its Google Docs platform by integrating the AI tool Gemini, allowing Workspace subscribers to create custom images directly within their documents. Powered by the Imagen 3 model, this feature offers users the ability to generate photorealistic images in various styles and formats using a simple text prompt. Currently, this functionality is exclusive to certain Google Workspace plans, with no information on a wider release. Read more

Nexa AI has unveiled OmniVision-968M, a breakthrough Vision Language Model designed for edge devices. By reducing the number of image tokens from 729 to 81, the model significantly improves efficiency and reduces computational demands. This innovation is set to enhance AI interactions in industries like healthcare and smart cities, while maintaining or improving accuracy. Read more
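The headline efficiency gain comes from shrinking the vision token sequence roughly nine-fold before it reaches the language model. As a rough illustration only (not Nexa AI's actual projector code), here is a minimal PyTorch sketch that compresses a 27x27 grid of 729 patch tokens into 81 by concatenating each 3x3 neighbourhood and projecting it back to the hidden size; the hidden size of 768 is an assumed placeholder.

```python
import torch
import torch.nn as nn

class TokenCompressor(nn.Module):
    """Compress a 27x27 grid of vision tokens (729) down to 9x9 (81).

    Illustrative sketch only; OmniVision-968M's real projector may differ.
    """
    def __init__(self, dim: int = 768):
        super().__init__()
        # Each 3x3 block of neighbouring patch tokens is concatenated
        # and projected back to the original hidden size.
        self.proj = nn.Linear(dim * 9, dim)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        b, n, d = tokens.shape                      # (batch, 729, dim)
        g = int(n ** 0.5)                           # 27x27 grid
        x = tokens.view(b, g, g, d)
        # Split the grid into 3x3 neighbourhoods: (b, 9, 9, 3, 3, d)
        x = x.view(b, g // 3, 3, g // 3, 3, d).permute(0, 1, 3, 2, 4, 5)
        x = x.reshape(b, (g // 3) ** 2, 9 * d)      # (b, 81, 9*dim)
        return self.proj(x)                         # (b, 81, dim)

tokens = torch.randn(1, 729, 768)                   # dummy vision tokens
print(TokenCompressor()(tokens).shape)              # torch.Size([1, 81, 768])
```

Fewer image tokens mean shorter sequences for the language model to attend over, which is what cuts latency and memory on edge hardware.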

Cruise, the autonomous vehicle division of General Motors, has admitted to submitting a false report to influence a federal investigation, resulting in a $500,000 fine. The company failed to disclose a significant incident where one of its robotaxis dragged a pedestrian in San Francisco. Amidst ongoing investigations by the NHTSA and SEC, Cruise has seen executive resignations, staff reductions, and has resumed supervised driving tests. Read more

AI expert Max Tegmark suggests that Elon Musk could influence the establishment of AI safety standards in the US, potentially easing the AI arms race between the US and China. Tegmark warns of the dangers of losing control over artificial general intelligence and advocates for global AI safety standards to prevent catastrophic outcomes for humanity. Read more

Elon Musk has escalated his legal battle against OpenAI, adding Microsoft and Reid Hoffman as defendants, accusing them of creating an AI monopoly through anticompetitive practices. Musk claims that OpenAI's 'lavish compensation' strategy is used to attract top-tier AI talent, with reports indicating significant spending on personnel. This lawsuit comes after Musk's departure from OpenAI's board in 2018 and his ongoing criticism of the company he helped establish. Read more

The U.S. Courts Advisory Committee has proposed amendments to the Federal Rules of Evidence to address the growing use of AI in legal proceedings. These changes focus on ensuring the authenticity and reliability of AI-generated evidence through new rules that require detailed information about AI systems and their outputs. The proposed Rule 707 aims to align AI evidence standards with those for expert testimony, addressing concerns about bias and inaccuracy. Read more

The Pentagon is making substantial investments in artificial intelligence, with a projected budget of $1.8 billion for AI and machine learning for 2025. Collaborations with AI developers like Anthropic, Palantir, and AWS are aimed at boosting data processing capabilities for the U.S. Department of Defense. Despite advancements, concerns remain about the reliability of AI systems, emphasizing the need for human oversight in military operations. Read more

Nevada's government is set to implement an AI-powered system to streamline the processing of unemployment applications. While the technology, developed by Google at a cost of $1 million, promises to expedite decisions, human oversight will play a crucial role in the final approval process. The initiative has sparked discussions about the balance between AI efficiency and the necessity for thorough human review. Read more

In a significant move, U.S. President Joe Biden and Chinese President Xi Jinping have concurred that decision-making regarding nuclear weapons should remain under human control, as opposed to being managed by artificial intelligence. This agreement comes amid the U.S.'s concerns over China's expanding nuclear arsenal and represents a crucial step in addressing the intertwined challenges of nuclear arms and artificial intelligence. However, the impact of this accord on future diplomatic discussions remains to be seen. Read more

The European Union Aviation Safety Agency (EASA) is focusing on integrating artificial intelligence to boost aviation safety. During its Annual Safety Conference in Budapest, EASA delved into innovations like smart cockpits and discussed the role of AI in mitigating risks such as GNSS interference and runway incursions, while emphasizing the enduring importance of the human element in aviation. Read more

A study by researchers from MIT, Harvard, and Cornell reveals that large language models like GPT-4 and Anthropic's Claude 3 Opus fail to create accurate world models. Despite their ability to handle language tasks, these models falter in real-world scenarios such as providing accurate driving directions, especially when unexpected changes occur. This suggests the need for developing new approaches to enhance their reliability in dynamic environments. Read more
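The driving-directions test in the study boils down to a simple kind of perturbation check: a model with a faithful internal map should still produce valid routes when a street is closed. The toy sketch below (using a small grid graph in place of the New York street network, and a made-up route rather than data from the paper) shows the shape of such a check.

```python
import networkx as nx

# Toy street grid standing in for a real city map.
city = nx.grid_2d_graph(5, 5)

def route_is_valid(graph, route):
    """A proposed route is valid if every consecutive pair of stops
    is actually connected by a street in the graph."""
    return all(graph.has_edge(a, b) for a, b in zip(route, route[1:]))

# Suppose a language model proposed this route from (0,0) to (0,3).
proposed = [(0, 0), (0, 1), (0, 2), (0, 3)]
print(route_is_valid(city, proposed))      # True on the intact map

# Perturbation: close one street and re-check. A model with a faithful
# internal map should reroute; one without often keeps suggesting
# the now-impossible path.
city.remove_edge((0, 1), (0, 2))
print(route_is_valid(city, proposed))      # False after the closure
```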

The European Union has released its initial draft of regulatory guidelines for general-purpose AI models as part of the AI Act. The draft outlines transparency, copyright, risk assessment, and mitigation protocols targeting major AI companies like OpenAI and Google. Stakeholders have until November 28 to provide feedback, with a final version anticipated by May 2025. Read more

Gendo, an AI startup founded in 2022 by George Proud and Will Jones, has raised €5.1 million to advance its generative AI software for architects. The platform, known for transforming sketches and text into hyper-realistic 3D designs, promises faster performance compared to traditional methods. With notable clients like Zaha Hadid Architects and KPF, Gendo plans to use the funds to expand its AI capabilities and create specialized solutions for major design firms. The funding round saw participation from PT1, LEA Partners, Concept Ventures, and Koro Capital. Read more

Virgin Media O2 has launched 'Daisy,' an AI bot designed to thwart scam callers by imitating the voice of an elderly woman, thus occupying potential scammers and decreasing their chances of reaching real victims. Developed in collaboration with YouTuber Jim Browning, Daisy uses live call transcriptions to generate responses in a human-like voice. While not available to the public, Daisy serves as a deterrent and educational tool against phone scams, with O2 offering services to report suspicious calls and texts. Read more
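O2 hasn't published how Daisy works beyond the broad description above, so the following is only a hedged sketch of what a transcribe-then-respond loop of that kind might look like. Every helper here (transcribe_chunk, generate_rambling_reply, synthesize_elderly_voice, and the call object) is a hypothetical placeholder, not Daisy's actual code.

```python
import time

def transcribe_chunk(audio: bytes) -> str:
    """Hypothetical hook for a speech-to-text service."""
    raise NotImplementedError

def generate_rambling_reply(transcript: str, history: list[str]) -> str:
    """Hypothetical hook for a language model prompted to give
    meandering, time-wasting answers in an elderly persona."""
    raise NotImplementedError

def synthesize_elderly_voice(text: str) -> bytes:
    """Hypothetical hook for text-to-speech tuned to sound like
    an elderly woman."""
    raise NotImplementedError

def handle_scam_call(call) -> None:
    """Keep a suspected scam caller occupied for as long as possible."""
    history: list[str] = []
    while call.is_active():                      # hypothetical call API
        transcript = transcribe_chunk(call.read_audio())
        history.append(transcript)
        reply = generate_rambling_reply(transcript, history)
        call.play_audio(synthesize_elderly_voice(reply))
        time.sleep(0.5)                          # brief conversational pause
```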

The Software Alliance, representing major technology companies like OpenAI and Microsoft, has urged President-elect Trump to champion U.S. leadership in AI through balanced regulations. In a letter, they advised a review of current policies that may stifle AI innovation and recommended maintaining some Biden-era policies while updating risk management strategies. The Alliance stressed the importance of responsible AI deployment, especially in sectors like credit, housing, and employment. The letter also encouraged Trump to participate in global AI standard discussions and collaborate with Congress to tackle AI risks. Amid Trump's focus on deregulation and a new government efficiency department, the Alliance emphasized the need for continued investment in AI R&D and addressed privacy and cybersecurity concerns. Read more

Otto Barten, director of the Existential Risk Observatory, has introduced a Conditional AI Safety Treaty aimed at curbing the development of potentially uncontrollable artificial intelligence. The proposal calls for international cooperation, particularly involving the U.S. and China, to halt unsafe AI training at specific capability thresholds. It also suggests the establishment of AI Safety Institutes to assess risks, while allowing the development of current AI technologies under strict safety conditions. This initiative comes with backing from notable experts like Geoffrey Hinton and Yoshua Bengio, yet faces geopolitical hurdles in achieving global consensus. Read more

Researchers have unveiled 'Face Anonymization Made Simple,' a new AI program designed to anonymize faces in images without altering the image's overall quality. Utilizing a diffusion model, the AI modifies key facial features to prevent recognition, offering a more straightforward and effective solution compared to previous methods. The open-source tool, based on Stable Diffusion, requires no facial key points or masks and is available on GitHub, presenting opportunities for various facial image processing applications amid rising privacy concerns. Read more
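For context on what "no key points or masks" buys you: a conventional diffusion-based anonymization pipeline first has to detect faces and build masks, then inpaint them. The sketch below shows that conventional approach using OpenCV face detection plus the diffusers inpainting pipeline; the model ID and prompt are assumptions for illustration, and this is explicitly not the new tool's method, which skips the detection and masking steps entirely.

```python
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# Detect faces with a classic OpenCV cascade and build a mask over them.
img = cv2.imread("portrait.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
).detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

mask = np.zeros(img.shape[:2], dtype=np.uint8)
for (x, y, w, h) in faces:
    mask[y:y + h, x:x + w] = 255   # white = region to regenerate

# Regenerate only the masked face regions with a diffusion model.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting",   # assumed model choice
    torch_dtype=torch.float16,
).to("cuda")

result = pipe(
    prompt="a different, anonymous person's face, photorealistic",
    image=Image.fromarray(cv2.cvtColor(img, cv2.COLOR_BGR2RGB)).resize((512, 512)),
    mask_image=Image.fromarray(mask).resize((512, 512)),
).images[0]
result.save("anonymized.jpg")
```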