Project 1

This is an AI project, but I also call it covering the oldies. The project is set up to test the difference between running Ollama with an Nvidia video card and with a non-Nvidia video card: essentially, not whether it will work, but why you would actually want to use an Nvidia card. This is not marketing talk. A non-Nvidia card (in this case an older ATI Radeon) doesn't have the extra horsepower that Nvidia's CUDA cores provide, but that is not a bottleneck that can't be overcome.

There is a significant difference between a system with an Nvidia card and one without, but other components can make up for it. A fast modern CPU of either brand (Intel or AMD), fast RAM such as DDR5, and a newer drive type such as an NVMe SSD all help to crunch the numbers faster. There are recommended specs for Ollama, an open-source application that runs many different LLMs (large language models: models that do natural language processing; think of a ChatGPT you can customize), and following them will make it run faster. So while there is a difference, you can offset it by following those specs. In other words, get the fastest CPU you can, as much of the fastest RAM as you can, and the largest, fastest drive you can (notice the theme here). And while Nvidia video cards seem to be on top right now, there are competitors that can go toe to toe with them.
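To put a number on "faster," here is a minimal sketch of the kind of speed test I'm talking about, using Ollama's REST API. With streaming turned off, the /api/generate response reports eval_count (tokens generated) and eval_duration (time spent generating, in nanoseconds), which is enough to compute tokens per second. The model name (llama3) and the host are placeholders, not what any particular box here runs; substitute whatever your server actually serves.

```python
import requests

def tokens_per_second(host: str, model: str, prompt: str) -> float:
    """Run one non-streaming generation and compute eval throughput.

    Ollama's /api/generate response includes eval_count (tokens
    generated) and eval_duration (nanoseconds spent generating them).
    """
    resp = requests.post(
        f"{host}/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=600,
    )
    resp.raise_for_status()
    data = resp.json()
    # eval_duration is in nanoseconds, so scale up to tokens per second.
    return data["eval_count"] / data["eval_duration"] * 1e9

if __name__ == "__main__":
    # llama3 and localhost:11434 (Ollama's default port) are assumptions;
    # point this at the model and host you actually want to measure.
    rate = tokens_per_second("http://localhost:11434", "llama3",
                             "Explain CUDA cores in one paragraph.")
    print(f"{rate:.1f} tokens/sec")
```

Running the same prompt against the same model on each system gives a like-for-like tokens-per-second comparison, which is the whole point of the experiment.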

Here is where the covering-the-oldies part comes in. I'm running this project on an AMD Ryzen 7 with 48 GB of RAM, two 256 GB SSDs, and an old Radeon. On it I have a VMware ESXi 8 hypervisor running three Linux systems. One of them is an Ubuntu 24.04 web server that hosts my website, fxartstudios.com, on Apache 2; it also runs Ollama in server mode in one of two Docker containers, and to top it off there is a Cloudflare Tunnel, with Cloudflare handling the DNS services. Gee, that was a mouthful. That is the non-Nvidia system. The Nvidia system is an AWS instance running Ubuntu 22.04 with 16 GB of RAM, an 80 GB NFS drive, and an Nvidia GPU. I could have done the speed test more easily with two instances running on AWS, but cloud computing can get expensive for small experiments, so I only have the cloud instance running when I need it.
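Since the AWS instance is only up when I need it, a quick reachability check before a benchmark run saves some head-scratching. Here is a rough sketch with made-up hostnames (substitute your own endpoints): Ollama's root endpoint answers with "Ollama is running" when the server is up, and /api/tags lists the models that have been pulled onto that machine.

```python
import requests

# Hypothetical endpoints: the Radeon box behind the Cloudflare Tunnel
# and the on-demand AWS instance. Replace with your real hosts/ports.
HOSTS = {
    "radeon-local": "http://localhost:11434",
    "nvidia-aws": "http://ec2-xx-xx-xx-xx.compute-1.amazonaws.com:11434",
}

for name, host in HOSTS.items():
    try:
        # Ollama's root endpoint returns the plain text "Ollama is running",
        # and /api/tags returns the models available on that server.
        status = requests.get(host, timeout=5).text.strip()
        tags = requests.get(f"{host}/api/tags", timeout=5).json()
        models = [m["name"] for m in tags["models"]]
        print(f"{name}: {status}; models: {models}")
    except requests.RequestException as exc:
        print(f"{name}: unreachable ({exc})")
```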