# GTC 2024 Day 2 Takeaways for the CUDA Project

The second day of Nvidia's GTC 2024 has provided deeper insights into the company's vision for the future of AI, with a focus on the practical applications and implications of the technologies unveiled during the first day. Let's analyze some of the big takeaways from today and consider their potential impact on our ecosystem.
## Blackwell's Pricing and Availability

During a Q&A session with journalists, Nvidia CEO Jensen Huang provided more details about the pricing and availability of the Blackwell chip. While initially stating that the chip would be priced between $30,000 and $40,000, Huang later clarified that he wasn't intending to provide specific pricing, emphasizing that Nvidia's focus is on designing and integrating the chips into data centers rather than selling individual units.

Nvidia's Chief Financial Officer, Colette Kress, mentioned that the company plans to start shipping Blackwell chips "later this year," but also noted that supply constraints are likely after the initial launch. This information is crucial for the CUDA project, as it helps us plan for the potential adoption of Blackwell in our ecosystem and anticipate any challenges related to availability. We will need to carefully consider the implications of these supply constraints on our roadmap and adjust our strategies accordingly.
## The Generative AI Future

[Jensen Huang's keynote](https://www.nvidia.com/en-us/events/gtc/keynote/) on Day 2 delved deeper into the transformative potential of generative AI, describing it as a fundamental shift in computing. He emphasized that the future is generative, with the majority of content being created on-the-fly by AI systems that understand context and user preferences. This shift from retrieval-based computing to generative-based computing opens up new possibilities for businesses across various industries.

This insight highlights the importance of positioning our platform to support the growing demand for generative AI workloads. By providing a decentralized infrastructure for AI computation, we can enable businesses to harness the power of generative AI without the need for significant upfront investments in hardware and infrastructure. This presents a significant opportunity for our project to establish itself as a key player in the generative AI ecosystem.
## The Expanding Scope of Digitization

Huang pointed out that the generative AI revolution extends beyond text and images, encompassing any data that can be digitized and structured. This includes proteins, genes, brain waves, and more. As long as patterns can be learned from the data, AI systems can understand and generate new content.

This presents an opportunity to explore new application domains and partnerships. By collaborating with organizations in diverse fields such as [healthcare](https://www.nvidia.com/en-us/industries/healthcare-life-sciences/), [finance](https://www.nvidia.com/en-us/industries/finance/), and [scientific research](https://www.nvidia.com/en-us/industries/higher-education-and-research/), we can facilitate the adoption of generative AI in these sectors, unlocking new possibilities for innovation and discovery. Our decentralized platform can provide the computational resources needed to process and analyze vast amounts of structured data, enabling breakthroughs in fields such as personalized medicine, drug discovery, and financial modeling.
## Prompt Engineering: The Future of Programming?

During the Q&A session, [Jensen Huang](https://www.nvidia.com/en-us/about-nvidia/leadership/jensen-huang/) discussed the growing importance of prompt engineering in the era of generative AI. He suggested that prompt engineering might become a more essential skill than traditional programming, as it enables people to interact with and guide AI systems effectively.

As we develop our platform, we should consider incorporating tools and resources that support prompt engineering, making it easier for developers and users to leverage generative AI capabilities. By providing a user-friendly environment for prompt engineering, we can lower the barriers to entry and accelerate the adoption of generative AI within our ecosystem. This may involve developing intuitive interfaces, providing educational resources, and fostering a community of prompt engineering enthusiasts who can share knowledge and best practices.
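As a purely illustrative sketch of what such tooling might look like, the Python snippet below outlines a minimal prompt-template helper; the class and method names are hypothetical and not part of any existing API or of our current codebase.

```python
# Minimal sketch of a prompt-template helper such tooling might expose.
# Names (PromptTemplate, render) are illustrative assumptions.
from string import Template


class PromptTemplate:
    """A reusable prompt with named placeholders."""

    def __init__(self, template: str):
        self._template = Template(template)

    def render(self, **fields: str) -> str:
        # Fail loudly if a placeholder is missing instead of sending a
        # half-filled prompt to the model.
        return self._template.substitute(**fields)


summarize = PromptTemplate(
    "You are a concise technical analyst.\n"
    "Summarize the following GTC session notes in $style style,\n"
    "highlighting implications for $audience:\n\n$notes"
)

prompt = summarize.render(
    style="bullet-point",
    audience="decentralized GPU providers",
    notes="Blackwell ships later this year; supply may be constrained.",
)
print(prompt)
```

Even a helper this small nudges developers toward reusable, reviewable prompts rather than ad-hoc strings scattered through application code, which is the kind of low barrier to entry we want our platform to offer.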
## Nvidia's Expanding Partnerships

Day 2 of GTC 2024 highlighted Nvidia's expanding partnerships with major tech companies and industry leaders. Companies such as [Amazon Web Services](https://aws.amazon.com/), [Dell](https://www.dell.com/), [Google](https://about.google/), [Meta](https://about.fb.com/), [Microsoft](https://www.microsoft.com/), [OpenAI](https://openai.com/), [Oracle](https://www.oracle.com/), [Tesla](https://www.tesla.com/), and xAI are expected to adopt the Blackwell chip, showcasing the growing demand for advanced AI hardware.

For the CUDA project, these partnerships serve as a validation of the potential for decentralized AI computation. By aligning our platform with the needs and requirements of these industry leaders, we can position ourselves as a complementary solution, offering a decentralized alternative for businesses seeking to leverage generative AI capabilities. We should actively seek out opportunities to collaborate with these companies, exploring ways in which our decentralized GPU resources can enhance their AI initiatives and provide a scalable, cost-effective solution for their computational needs.
## Edge Computing and Inference Microservices

Nvidia's introduction of [NVIDIA Inference Microservices (NIM)](https://developer.nvidia.com/nvidia-inference-server) highlights the growing importance of edge computing and seamless AI deployment. NIM aims to simplify the process of integrating AI models into applications, making it easier for developers to leverage AI capabilities without deep expertise in AI infrastructure.
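To illustrate why this matters for developers, the sketch below shows what invoking a locally deployed NIM-style microservice could look like, assuming it exposes an OpenAI-compatible chat endpoint; the URL, port, and model identifier are assumptions for illustration only.

```python
# Hypothetical example of calling a locally deployed NIM-style microservice.
# NIM containers generally expose an OpenAI-compatible HTTP API; the URL,
# port, and model name below are placeholders, not a confirmed deployment.
import requests

NIM_URL = "http://localhost:8000/v1/chat/completions"  # assumed local endpoint

payload = {
    "model": "meta/llama3-8b-instruct",  # placeholder model identifier
    "messages": [
        {"role": "user", "content": "Summarize the key GTC 2024 announcements."}
    ],
    "max_tokens": 256,
}

response = requests.post(NIM_URL, json=payload, timeout=60)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

If model serving converges on this kind of standard interface, decentralized GPU providers on our network could slot in behind the same API surface that developers already use.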
## Computational Medicine and Healthcare AI

Nvidia's partnerships with [GE Healthcare](https://www.gehealthcare.com/) and [Johnson & Johnson MedTech](https://www.jnjmedtech.com/en-US) underscore the growing application of AI in the healthcare sector. These collaborations focus on leveraging Nvidia's AI platforms to enhance medical imaging, surgical decision-making, and patient care.

This highlights the potential for our decentralized GPU resources to support computational medicine and healthcare AI initiatives. By collaborating with healthcare organizations and researchers, we can provide the computational power needed to develop and deploy AI models in a secure, privacy-preserving manner, aiding medical image analysis, drug discovery, and personalized medicine.
## Future Plans and Testnet Deployment

As GTC 2024 concludes, the CUDA project's focus shifts to deploying our platform to a testnet environment. This phase is crucial for refining our smart contracts, APIs, and interfaces, and for ensuring platform stability and security.

To bolster this effort, we are welcoming two new team members specializing in blockchain development, distributed systems, and GPU computing. They will focus on areas such as smart contract optimization, decentralized storage integration with [IPFS](https://ipfs.io/) and [Filecoin](https://filecoin.io/), GPU resource management, and the development of developer tools and SDKs.

This expansion marks a significant stride in our journey towards creating a decentralized, community-driven AI ecosystem.
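To give a flavor of the decentralized storage work mentioned above, here is a minimal Python sketch of publishing a model artifact to IPFS. It assumes a local Kubo daemon with its default RPC endpoint on port 5001; the helper name and file path are illustrative and not part of our final SDK.

```python
# Rough sketch of one piece of the decentralized storage integration:
# publishing a model artifact to a local IPFS node and recording its CID.
# Assumes a local IPFS (Kubo) daemon with its HTTP RPC API on port 5001;
# the helper name and artifact path are illustrative placeholders.
import requests

IPFS_ADD_URL = "http://127.0.0.1:5001/api/v0/add"  # default Kubo RPC endpoint


def publish_artifact(path: str) -> str:
    """Add a file to the local IPFS node and return its content identifier (CID)."""
    with open(path, "rb") as artifact:
        response = requests.post(IPFS_ADD_URL, files={"file": artifact}, timeout=120)
    response.raise_for_status()
    return response.json()["Hash"]


if __name__ == "__main__":
    cid = publish_artifact("model-weights.bin")  # hypothetical artifact name
    # The CID could later be referenced on-chain so GPU providers can fetch
    # exactly the artifact a job specifies from the IPFS/Filecoin network.
    print(f"Published artifact with CID: {cid}")
```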
## Wrapping Up GTC 2024 Day 2

Day 2 of Nvidia's GTC 2024 has been enlightening, emphasizing the necessity for the CUDA project to remain flexible and responsive to the evolving AI landscape. Our commitment to empowering developers, researchers, and businesses with decentralized AI computing remains steadfast as we anticipate the deployment and future growth of our platform.