OpenAI has cut off a developer who built a device that could respond to ChatGPT queries to aim and fire an automated rifle. The device went viral after a video on Reddit showed its developer reading firing commands aloud, after which a rifle beside him quickly began aiming and firing at nearby walls.
Holding Off The next big thing in the world of artificial intelligence are so-called "AI agents" — models that are capable of interacting with their environment, like a computer desktop, allowing them to autonomously complete tasks without human intervention.
A lawyer for billionaire Elon Musk has asked attorney generals in the states of California and Delaware to push OpenAI to auction a major stake in its business to decide fair value of its charitable asset during its corporate restructuring,
The tech leader and his family denied Ann Altman's accusations in a statement that was posted online just as the lawsuit went public.
Former OpenAI employee Suchir Balaji was found dead from an apparent suicide in November, but his mother is calling foul play.
Indeed, Musk suggested that synthetic data — data generated by AI models themselves — is the path forward. “The only way to supplement [real-world data] is with synthetic data, where the AI creates [training data],” he said. “With synthetic data … [AI] will sort of grade itself and go through this process of self-learning.”
Nvidia, Google and hot startup OpenAI are turning to "synthetic data" factories amid demand for massive amounts of data needed to train artificial intelligence models.
Phi-4 and an rStar-Math paper suggest that compact, specialized models can provide powerful alternatives to the industry’s largest systems.
A new blog post by OpenAI CEO Sam Altman indicates that the AI firm knows how to build AGI as it shifts gears and focus to superintelligence.
OpenAI on Friday outlined plans to revamp its structure, saying it would create a public benefit corporation to make it easier to "raise more capital than we'd imagined," and remove the restrictions imposed on the startup by its current nonprofit parent.
Red teaming has become the go-to technique for iteratively testing AI models to simulate diverse, lethal, unpredictable attacks.
To demonstrate we are still not at human-level intelligence, Chollet notes some of the simple problems in ARC-AGI that o3 can't solve. One such problem involves simply moving a colored square by a given amount -- a pattern that quickly becomes clear to a human.