News

Install your own “ChatGPT” locally with NVIDIA Chat With RTX

Generative AI tools such as Google Gemini and ChatGPT have become an integral part of our daily lives. These tools, built on large language models, have long been associated with cloud-based applications. With the launch of Chat with RTX by Nvidia, however, it is now possible to run such models directly on your own computer, without requiring an internet connection.

To get started, make sure you have the latest drivers for your Nvidia graphics card and download the Chat with RTX app. The application runs on Windows 11 and requires a GeForce RTX 30- or 40-series graphics card (Ampere or Ada architecture) with at least 8 GB of VRAM, plus 16 GB of system RAM, and it offers a user-friendly, simple experience.
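If you want to verify the VRAM requirement before downloading 35 GB, one way is to parse the output of Nvidia's `nvidia-smi` command-line tool. The sketch below is a minimal, hypothetical helper (the function name and the sample output string are illustrative, not part of Chat with RTX); it assumes a query like `nvidia-smi --query-gpu=name,memory.total --format=csv,noheader,nounits`, which reports memory in MiB.

```python
# Hypothetical check for Chat with RTX's minimum of 8 GB of VRAM,
# based on one CSV line from `nvidia-smi` (memory reported in MiB).

MIN_VRAM_MIB = 8 * 1024  # 8 GB expressed in MiB

def meets_vram_requirement(csv_line: str, minimum_mib: int = MIN_VRAM_MIB) -> bool:
    """Return True if the GPU's reported total VRAM is at least `minimum_mib`."""
    # Split on the last comma so GPU names containing commas stay intact.
    name, memory = (field.strip() for field in csv_line.rsplit(",", 1))
    return int(memory) >= minimum_mib

# Example line as nvidia-smi might print it for a 12 GB card:
sample = "NVIDIA GeForce RTX 4070, 12282"
print(meets_vram_requirement(sample))  # True: 12282 MiB >= 8192 MiB
```

On a real machine you would feed the helper the live command output (for example via `subprocess.run`), but the requirement check itself is just this comparison.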

It's important to note that Chat with RTX is currently considered a "demo" by Nvidia, which means it may have bugs and limitations. Additionally, the app download is substantial, weighing around 35 GB due to the integration of several large language models. Once installed, the application will be accessible from your Start menu.

Chat with RTX's interface is intuitive: you select the underlying AI model, Mistral or Llama, then send your prompts as you would with other similar tools. Features include text generation based on documents you provide, as well as YouTube video analysis that offers answers based on their transcriptions.

Although Chat with RTX is still in the development phase, it already offers exciting features for users interested in exploring the potential of generative AI language models. With the ability to add your own data and analyze YouTube videos, this application opens up new perspectives for the use of AI for a variety of purposes.
