Meta launches latest chip for AI workloads
The second version of the Meta Training and Inference Accelerator (MTIA) highlights how cloud hyperscale players are creating their own processors for large language model (LLM) training and inferencing.
It's easy to conclude that generative AI is going to take jobs from humans. But there's another argument: genAI will be needed just to maintain and improve productivity because there will be fewer workers. There's a demographic donut hole in the workforce that may be partially filled by genAI.
Matt Wood, Vice President of AI at AWS, outlined how enterprises will mix and match multiple models depending on the use case, why orchestration is needed, and how regulated industries may have an advantage in adopting genAI.
Oracle continued to build out the generative AI tools for its Autonomous Database, adding conversational AI, a broad set of large language models, new analytics for knowledge graphs, spatial learning and no-code modeling.
JPMorgan Chase executives were asked a simple question during the bank's third quarter earnings conference call: What's the benefit of spending the most on technology in the banking industry? The answer illustrates how every company must be a tech company to stay relevant, and how customer engagement has to span multiple channels.
Generative AI is a boardroom issue, and Microsoft and Google Cloud appear to have a stronger CXO narrative than AWS.
The two companies said the plan is to combine J.D. Power's vehicle configuration and performance data points with Palantir's Foundry and AIP platforms.