GPU in AI

Gia Huy (CisMine)
2 min read · Aug 31, 2024

--

Nowadays, with the rapid development of AI, demand for it keeps increasing, bringing ever larger amounts of data and a wider variety of tasks. This makes Machine Learning and Deep Learning problems more time-consuming and memory-intensive to process. In this series, I will guide you on how to optimize Machine Learning and Deep Learning tasks by using GPUs, aiming to enhance performance and efficiency.

What will you learn?

GPUs in AI will help you handle Machine Learning and Deep Learning tasks more effectively in every respect (accuracy, speed, and memory efficiency), from basic to advanced levels, and everything will be explained in a way that is easy to follow.

In this series, you will learn how to use GPUs appropriately and to their full potential when applying them to AI in general, and to Machine Learning and Deep Learning in particular. Rest assured, there will be code written from scratch as well as guidance on setting up the necessary environments and packages.

Note: Since this series focuses on Machine Learning and Deep Learning, you should have a background in:

  • Machine Learning: A good understanding of Classification, Regression, and Clustering
  • Deep Learning: A good understanding of CNN (Convolutional Neural Network) and Gradient Descent

Please note that for Deep Learning, I will be using PyTorch exclusively (the reasons for choosing PyTorch over TensorFlow will be explained in future posts).
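
As a quick preview (not the series code itself), here is a minimal sketch of how PyTorch typically selects and uses a GPU; the tensor sizes are arbitrary placeholders chosen for illustration.

import torch

# Pick the GPU if PyTorch can see one, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Running on: {device}")

# Tensors (and models) are placed on a device explicitly with .to(device),
# or by creating them there directly.
x = torch.randn(1024, 1024, device=device)  # created directly on the chosen device
y = torch.randn(1024, 1024).to(device)      # created on the CPU, then moved

# The matrix multiplication runs on whichever device the tensors live on.
z = x @ y
print(z.shape, z.device)

This device-selection pattern is the usual starting point when moving Machine Learning and Deep Learning code onto a GPU.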

--

Gia Huy (CisMine)

My name is Huy Gia. I am currently pursuing a B.Sc. degree. I am interested in the following topics: DL in Computer Vision and Parallel Programming with CUDA.