Colab notebook (https://colab.research.google.com/) - Colab is Google's hosted Jupyter Notebook service; its key advantage here is that Google Gemini models can be imported and called directly from a running notebook.
Gemini LLM model (https://aistudio.google.com) - Google Gemini is a large language model (LLM) made freely available by Google; it can be accessed through Google AI Studio or Colab.
Azure OpenAI Service (Microsoft) - Microsoft's Azure OpenAI Service provides OpenAI's language models (e.g., GPT-3.5, GPT-4), available to Microsoft Azure customers through Azure AI Studio.
Nvidia CUDA code (https://developer.nvidia.com/cuda-toolkit) - The NVIDIA® CUDA® Toolkit is a software development environment for creating high-performance, GPU-accelerated applications. The toolkit includes GPU-accelerated libraries, a C/C++ compiler, and a runtime library.
MathWorks and Nvidia CUDA code (https://www.mathworks.com/campaigns/offers/generate-cuda-gpu-code-matlab.html?s_tid=vid_pers_ofr_recs) - MATLAB® from MathWorks® is a software environment for developing algorithms and for modeling and simulating scientific theories and applications. MATLAB's GPU code-generation workflow produces optimized CUDA code that runs on NVIDIA GPUs (e.g., NVIDIA DRIVE, Jetson, and Tesla) for deep learning frameworks.
Open WebUI (https://www.openwebui.com/) - Open WebUI is an extensible, self-hosted AI interface that operates entirely offline.
Ollama (https://ollama.com/) - Ollama is a lightweight, extensible framework that can run large and small language models on a local machine.
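As a minimal sketch of how the last two tools fit together: Open WebUI typically talks to a locally running Ollama server, which exposes a REST endpoint (`POST /api/generate`) on its default port 11434 that accepts a JSON body with `model` and `prompt` fields. The snippet below only builds and prints such a request payload; the model name `llama3` and the assumption that a local server is running are illustrative, not requirements of this text.

```python
import json

# Assumption: an Ollama server running locally on its default port.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    """Build the JSON payload for Ollama's /api/generate endpoint.

    "stream": False asks the server to return one complete response
    instead of a stream of partial tokens.
    """
    return {"model": model, "prompt": prompt, "stream": False}

# Construct (but do not send) a sample request.
payload = build_request("llama3", "Why is the sky blue?")
print(json.dumps(payload))
```

Sending the payload is then a single HTTP POST (for example with `urllib.request` or the `requests` library) to `OLLAMA_URL`; Open WebUI performs the same kind of call on the user's behalf behind its chat interface.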