
Oracle and Nvidia have announced an integration of Nvidia's accelerated computing and inference software with Oracle's AI infrastructure and generative AI services. The companies' goal is to help organizations worldwide speed up the creation of AI applications.
With the integration of Oracle Cloud Infrastructure (OCI) and the Nvidia AI Enterprise software platform, more than 160 AI tools and more than 100 Nvidia NIM microservices will be available natively through the OCI console.
In addition, Oracle and Nvidia are collaborating on no-code deployment of Oracle and Nvidia AI Blueprints and on accelerating AI vector search in Oracle Database 23ai with the Nvidia cuVS library.
About the partnership

With the partnership, Nvidia AI Enterprise will be natively available through the OCI console, giving customers quick and easy access to AI tools. The expectation is that this will reduce the time needed to deploy models.
The integration includes Nvidia NIM, a set of more than 100 cloud-native inference microservices optimized for leading AI models, among them the latest Nvidia Llama Nemotron models for advanced AI reasoning.
Nvidia AI Enterprise will be available as a deployment image for OCI bare-metal instances and for Kubernetes clusters using OCI Kubernetes Engine. OCI console customers benefit from direct billing and customer support through Oracle.
Organizations can deploy OCI's more than 150 AI and cloud services with Nvidia accelerated computing and Nvidia AI Enterprise in the data center, public cloud, or at the edge. As a result, organizations get an integrated AI stack that helps them meet data-privacy, sovereignty, and low-latency requirements.
Blueprints

OCI AI Blueprints provide deployment templates with code that let you run AI workloads quickly. They also offer clear hardware recommendations for Nvidia GPUs, NIM microservices, and prepackaged observability tools.
Nvidia Blueprints offer developers a unified experience, providing reference workflows for enterprise use cases. With them, organizations can build and operationalize custom AI applications using Nvidia AI Enterprise and Nvidia Omniverse software, application programming interfaces, and microservices.
Oracle and Nvidia also plan to simplify the development, deployment, and scaling of AI and simulation applications. To that end, the Nvidia Omniverse platform and Nvidia Isaac Sim development workstations, along with the Omniverse app streaming kit, should be available in the Oracle Cloud Infrastructure Marketplace later this year.
The integration will bring preconfigured bare-metal compute instances accelerated by Nvidia L40S GPUs.
Real-time AI inference

With the partnership, the companies want to let data scientists access pre-built Nvidia NIM microservices directly in OCI Data Science. This supports real-time AI inference use cases without the complexity of managing infrastructure.
The models will run in the customer's OCI tenancy, with the option of per-hour billing or of applying Oracle Universal Credits.
Organizations can use this integration to deploy inference endpoints with preconfigured, optimized NIM engines in minutes. This shortens time to value for use cases such as AI technology assistants, real-time recommendation engines, and copilots.
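As a minimal sketch of what calling such an endpoint looks like: NIM microservices expose an OpenAI-compatible chat-completions API, so a deployed endpoint can be queried with a standard HTTP request. The endpoint URL and model name below are hypothetical placeholders, not values from the announcement.

```python
import json
import urllib.request

# Hypothetical NIM endpoint address in a customer's OCI tenancy; NIM
# containers serve an OpenAI-compatible /v1/chat/completions route.
NIM_URL = "http://localhost:8000/v1/chat/completions"

def build_chat_request(model: str, prompt: str, max_tokens: int = 256) -> dict:
    """Build an OpenAI-compatible chat-completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def send_chat_request(payload: dict) -> dict:
    """POST the payload to the NIM endpoint and return the parsed reply."""
    req = urllib.request.Request(
        NIM_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Build (but do not send) a request; the model name is illustrative only.
payload = build_chat_request("meta/llama-3.1-8b-instruct",
                             "Summarize OCI in one line.")
```

The same payload shape works against any OpenAI-compatible server, which is part of what makes swapping a managed NIM endpoint into existing applications straightforward.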
This ease of use lets customers start with the integration on smaller workloads and scale seamlessly to company-wide deployments. Faster development and deployment should also reduce companies' costs.
Oracle Database 23ai
Oracle and Nvidia are working together to speed up the creation of vector embeddings and vector indices using Nvidia GPUs and the Nvidia cuVS library. These are the compute-intensive parts of AI Vector Search workloads in Oracle Database 23ai.
Organizations will be able to generate embeddings through bulk vectorization of large volumes of input data, such as text, images, and video, as well as quickly create and maintain vector indices.
Integrating this acceleration with AI Vector Search in Oracle Database improves AI pipeline performance and helps support high-volume AI vector workloads.
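To illustrate the operation being accelerated, the sketch below runs a brute-force nearest-neighbor search over embeddings in pure Python; this distance computation and the associated index build are exactly what cuVS moves onto the GPU at scale. The vectors and query here are made-up toy data.

```python
import math

def cosine_distance(a: list[float], b: list[float]) -> float:
    """1 minus the cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

def nearest(query: list[float], vectors: dict[str, list[float]]) -> str:
    """Return the key of the stored vector closest to the query."""
    return min(vectors, key=lambda k: cosine_distance(query, vectors[k]))

# Toy embeddings; a real vector-search workload holds millions of
# high-dimensional vectors, which is why GPU acceleration matters.
docs = {
    "gpu": [0.9, 0.1, 0.0],
    "db":  [0.1, 0.9, 0.2],
}
best = nearest([0.8, 0.2, 0.1], docs)  # -> "gpu"
```

A linear scan like this grows with collection size; vector indices (the other piece cuVS accelerates) exist precisely to avoid comparing the query against every stored vector.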
NVIDIA Blackwell on the horizon

OCI will be among the first cloud service providers to offer Nvidia's next-generation accelerated computing platform. Built on the Blackwell architecture introduced a year ago, Blackwell Ultra includes the Nvidia GB300 NVL72 rack-scale solution and the Nvidia HGX B300 NVL16 system.
The GB300 NVL72 offers 1.5 times more AI performance than the Nvidia GB200 NVL72 and increases Blackwell's revenue opportunity for AI factories by 50 times compared with factories built on Nvidia Hopper.
Source: Nvidia.

Source: https://www.adrenaline.com.br/nvidia/oracle-e-nvidia-colaboram-para-ajudar-as-empresas-a-acelerar-a-inferencia-de-agentes-de-ia/