Tomato Soup - Visual Assist Team Blog
https://www.wholetomato.com/blog

Introduction to CUDA development + How to set up with Visual Studio
https://www.wholetomato.com/blog/intro-to-cuda-and-visual-studio-installation/
Wed, 05 Feb 2025

Introduction

Think about this: have you ever thought about two things at once? If you reflect a bit, our brains are immensely complex, yet they follow only one train of thought. Sure, a lot can happen subconsciously, but you can be conscious of only a single thing at a time; you can't focus on two things simultaneously.

But what if you could? This opens up a wide array of possibilities. Imagine learning from multiple sources, or solving three math equations in your head simultaneously, or literally multitasking with each hand doing something different.

That's the idea behind how graphics processing units (GPUs) are being used to fast-track a few specialized technologies. Because they can run significantly more threads than CPUs, GPUs can execute tasks that require heavy parallel processing, such as rendering graphics, training machine learning models, and running complex simulations.

And one of the ways to program your GPU to produce something other than graphics is a framework called CUDA. That's what we're talking about in this blog today.

Why is CUDA being used now

CUDA, which stands for Compute Unified Device Architecture, speeds up computing tasks by using the power of graphics processing units (GPUs). It is a framework developed by NVIDIA in 2006. CUDA allows developers to write programs that divide large computing tasks into smaller ones using parallel computing. 

This uses the many cores of a GPU to perform multiple calculations simultaneously—unlike a CPU, which uses a few powerful cores optimized for sequential processing. This parallel processing capability significantly speeds up tasks that involve large datasets or complex computations, such as those found in graphics rendering, scientific simulations, and machine learning.
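To make that concrete, here is what the parallelism looks like in code. The sketch below is a hypothetical CUDA kernel for adding two vectors: instead of a CPU loop walking the array one element at a time, the GPU launches one thread per element and they all run concurrently.

```cuda
#include <cuda_runtime.h>

// Each thread computes exactly one element of the result;
// thousands of these run concurrently across the GPU's cores.
__global__ void vectorAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // unique global index
    if (i < n) {              // guard: the grid may be larger than n
        c[i] = a[i] + b[i];
    }
}
```

On a CPU this would be a sequential for loop over n elements; here the loop disappears, because the launch configuration creates a grid with at least n threads.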

Nvidia's CUDA has been around for nearly two decades, and thanks to its popularity and native compatibility with Nvidia's own video cards, it has emerged as one of the leaders in the industry. And even though CUDA's chokehold on the space is loosening, it remains a top choice for accelerating the training of machine learning models.

Industries using CUDA 

We've talked about the advantages of GPUs and how you can use CUDA to program them for specific tasks. The most popular use case right now is machine learning and AI, but we've listed a number of other industries you may not know about that also take advantage of GPU computing power.

Industry | Task / Work Needed | How CUDA-enabled programs help
Data Science & AI | Deep learning training, NLP, recommendation systems | Dramatically speeds up training of AI models, helping with things like chatbots and recommendation algorithms.
High-Performance Computing (HPC) | Scientific simulations, physics calculations | Speeds up complex science experiments and research.
Finance | Risk modeling, high-frequency trading (HFT), portfolio optimization | Computes complex financial calculations much faster, which helps traders make quick decisions.
Autonomous Vehicles | Object detection, sensor fusion, path planning | Helps self-driving cars "see" and react to their surroundings in real time.
Manufacturing & Industrial Automation | Predictive maintenance, defect detection, robotic control | Helps machines spot problems before they happen and improves automation.
Weather & Climate Science | Climate modeling, hurricane prediction, data assimilation | Runs weather simulations much faster to improve forecasts.
Cybersecurity | Anomaly detection, encryption/decryption, threat analysis | Helps detect hackers and secure data faster.
Robotics | Real-time sensor processing, AI-based control, SLAM (Simultaneous Localization and Mapping) | Helps robots process what they see and move more accurately.
Blockchain & Cryptography | Cryptocurrency mining, transaction validation | Makes mining cryptocurrencies and securing transactions faster.

Challenges in learning CUDA development

While GPU programming with CUDA is on the rise, there is still a significant barrier to becoming a skilled CUDA programmer. Its biggest strength is also one of the complicating factors in learning it: CUDA is designed for parallel computing, which is fundamentally different from traditional serial programming. Programmers need to grasp concepts like threads, blocks, and grids, and how they map to GPU hardware.

In addition, C/C++, a lower-level language generally suited to intermediate developers, is arguably the language to learn if you want to get the most out of CUDA (you can also opt for Python using PyTorch or JAX).

Lastly, CUDA requires deeper knowledge of the physical hardware (i.e., which NVIDIA GPU(s) you're using). There is extra setup involved, in both hardware and software toolkits, before you can do basic development and testing. Achieving high performance also requires studying the GPU architecture, carefully optimizing your code, and managing memory tightly.
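To give a feel for the extra setup and memory management involved, here is a sketch of the typical host-side boilerplate: memory must be explicitly allocated on the device, copied across, and freed, and the programmer chooses how the work maps onto blocks of threads. (The kernel and sizes below are illustrative, not from any particular project.)

```cuda
#include <cuda_runtime.h>
#include <cstdlib>

// Illustrative kernel: scale every element in place.
__global__ void scale(float* data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    float* host = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) host[i] = 1.0f;

    float* device = nullptr;
    cudaMalloc(&device, bytes);                              // allocate GPU memory
    cudaMemcpy(device, host, bytes, cudaMemcpyHostToDevice); // CPU -> GPU copy

    const int threadsPerBlock = 256;                         // block size
    const int blocks = (n + threadsPerBlock - 1) / threadsPerBlock; // grid size
    scale<<<blocks, threadsPerBlock>>>(device, 2.0f, n);     // launch the grid

    cudaMemcpy(host, device, bytes, cudaMemcpyDeviceToHost); // GPU -> CPU copy
    cudaFree(device);                                        // release GPU memory
    free(host);
    return 0;
}
```

None of this bookkeeping exists in ordinary CPU code, which is a large part of the learning curve described above.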

Setting up your first CUDA programming project

A CUDA .cu file with proper syntax highlighting and code analysis features opened in Visual Studio.

Starting your first-ever CUDA project may seem daunting, but with the right directions you can configure Visual Studio for CUDA programming in just an hour. Follow the steps below to get started:

Installing Visual Studio

Visual Studio is a good first option for an IDE if you are already familiar with C++. It integrates with the NVIDIA CUDA Toolkit, which lets you compile, debug, and optimize CUDA applications within the same platform.

  • Download Visual Studio

    First, download Visual Studio from Microsoft. Choose whichever edition you prefer. For our installation, we downloaded the Community edition of Visual Studio 2022, as it's the latest supported version for our Windows 11 system.
  • Run the installer to complete the installation

    Follow the prompts until you get to the Visual Studio Installer. It will ask you for a couple of things, such as the install directory, and will check a couple of dependencies. Afterwards, you should be able to launch Visual Studio from this window or from a shortcut.

Installing the CUDA Toolkit

With Visual Studio installed, you will need to download the CUDA Toolkit for Visual Studio. It provides the tools, libraries, and compiler (nvcc) needed to develop and run CUDA applications within Visual Studio, enabling GPU-accelerated computing on NVIDIA GPUs for high-performance tasks.

  • Verify you have a CUDA-compatible GPU
    To ensure smooth operations, first check if your current GPU is a supported device. You can do this by navigating to the Display Adapters section in the Windows Device Manager. For more information, visit NVIDIA’s install guide. 
  • Download CUDA Toolkit from NVIDIA

    Visit NVIDIA's website to download and learn more about the toolkit. Before downloading, ensure that you have chosen the correct OS, version, etc. The download in our case was 3.2 GB, but make sure you have at least 10 GB of free space, as the installation files are temporarily extracted before the installer runs.

  • Run the installer

    After downloading, run the installer. It will scan your device for any missing dependencies or pre-existing installs and adjust the installation accordingly. Afterwards, you will have the CUDA Toolkit installed on your system. Nsight, which provides debugging and profiling features specific to CUDA applications, will also be installed.

    If you encounter any issues with installing the toolkit, consult NVIDIA’s installation and troubleshooting guide.

    Bonus tip: If you prefer Visual Studio Code, you should install Nsight from this link instead. It's an application development environment for heterogeneous platforms that brings CUDA development for GPUs into Microsoft's Visual Studio Code.

Getting started with your first CUDA project in Visual Studio

After installing both Visual Studio and the CUDA toolkit, you are now ready to initialize your first project within Visual Studio.

  • Creating a new project.
    Start by opening Visual Studio and create a new project or clone an existing repository to start your first project file.
  • Initializing your project.

    At this point you have two options: start a completely blank console project, or choose the CUDA 12.8 project template. The main difference is that the CUDA Runtime template comes pre-equipped with the usual workloads, sample code, and use cases. However, starting from scratch lets you configure the project with only what you need, and it also familiarizes you with the workspace. For this project, we'll start with a completely blank project.
  • Setting your build configuration

    At the top of the Visual Studio window, choose Release and x64 (if you're running a 64-bit system). This tells Visual Studio to build a version of the app that can be deployed, as opposed to a debug build.
  • Adjusting build dependencies

    You need to ensure that Visual Studio knows you're trying to build and execute CUDA files. To configure this, right-click on your project name ("CUDA Sample") and click Build Dependencies > Build Customizations. A new window will pop up listing the available build customization files; be sure to tick CUDA 12.8 and hit OK.

  • Adding a CUDA C++ or Header file

    To add new source files, simply add new items as you would any normal .cpp or header file. Right-click on a folder and click Add > New Item to access your file options.
  • Verifying file and project setup is correct
    At this point, we suggest building the solution to ensure that everything is working smoothly. If nothing breaks, congratulations! You can now start working on your first CUDA file inside Visual Studio. NVIDIA also provides a few sample projects so you can test, debug, and familiarize yourself with the setup using existing projects before creating one entirely from scratch.
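If you want a sanity check that goes beyond an empty build, a minimal .cu file like this hypothetical hello-world exercises the compiler, the kernel-launch syntax, and the runtime in one go; build and run it, and you should see one line per thread.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// A trivial kernel: every thread prints its own coordinates.
__global__ void hello() {
    printf("Hello from block %d, thread %d\n", blockIdx.x, threadIdx.x);
}

int main() {
    hello<<<2, 4>>>();        // 2 blocks x 4 threads = 8 lines of output
    cudaDeviceSynchronize();  // wait for the kernel before the program exits
    return 0;
}
```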

Optimizing your setup

Visual Studio and NVIDIA have made giant strides in making CUDA development easier to access and set up. However, as CUDA is proprietary technology, there may still be missing syntax highlighting or confused prompts from Visual Studio's IntelliSense.

To alleviate this, it is recommended to install supplementary plugins from the Visual Studio marketplace that can help with properly highlighting symbols. For example, you can download and install the Visual Assist plugin which adds support for CUDA-specific code that Visual Studio’s IntelliSense might not recognize yet. It also comes with the added benefit of providing its core features of navigation, refactoring, code assistance, and more, on top of the added support for .cu and .cuh files.

visual assist for C++ CUDA development

The Visual Assist plugin adds support for recognizing CUDA-specific code. VA recognizes you are using a symbol that references a missing header file and adds it for you.

Conclusion

While CUDA is a powerful tool that is likely to remain significant in the near future, the landscape of parallel computing is dynamic, and its dominance will depend on technological advancements and shifts in industry needs. But given the rapid growth of AI and machine learning, CUDA is likely to remain relevant due to its optimization for deep learning tasks, especially as NVIDIA continues to innovate in this space. 

In summary, if you're looking to expand your software development skills into a growing space, then learning CUDA could be it for you.

The post Introduction to CUDA development + How to set up with Visual Studio first appeared on Tomato Soup.

Visual Assist 2024.3 release post
https://www.wholetomato.com/blog/visual-assist-2024-3-release-post/
Thu, 02 May 2024

Another Visual Assist update?! VA 2024.3 is headlined by a dramatic improvement to the performance of Find References. This release also features both a fix and an improvement related to Move Implementation. We also have some key features exiting their beta phase (try them out!). Lastly, performance for C# should be better than ever with key fixes rolling out in this release.

Download the release now from our website.

Better find references results in multiple faster features

If you've updated to at least Visual Assist 2024.1, you may have been enjoying the benefits of significantly improved parser performance that cut initial parsing time fifteenfold. In this release, we've added something even bigger: performance improvements not just at startup, but all the time.

Find References, the feature that looks for symbol usage within the current project or solution, has been greatly improved for performance and speed. But the Find References engine underpins many other common and key features in Visual Assist: Rename finds references in order to rename them; Implement Methods finds methods in order to know which ones do and do not exist; and so forth. That means this performance improvement applies to many key features and navigations: Rename, Change Signature, Implement Methods, and more.

Visual Assist’s Find references window. Takes significantly less time to find all references in 2024.3.

Test Results

The development team ran a few tests to compare Find References performance between the new Visual Assist version and an older version of the same plugin. They also tested it against Visual Studio's built-in Find References.

The tests were run on the Unreal Engine 5.3 source code using the Lyra game examples, with two symbols, TOptional and MakeBox, as the search targets. The tests used Visual Studio 2022 17.8 with Visual Assist 2024.3 and 2024.2. Time was measured from the start of Find References until all references were found.

The result of the tests are as follows:

Setup 1 – TOptional:

| Run 1 | Run 2 | Run 3 | Average
Visual Assist 2024.3 | 5:11 | 4:25 | 4:17 | 4:37
Visual Assist 2024.2 | 14:27 | 18:02 | 13:12 | 15:13
Visual Studio 2022 | 38:26 | * | * | 38:26

Setup specs: AMD Ryzen 7 7800X3D processor, Team T-Force Delta 32GB (2 x 16GB) 288-pin RAM, Crucial T700 Gen5 NVMe M.2 SSD
* Test timed out.

 

Setup 2 – MakeBox:

| Run 1 | Run 2 | Run 3 | Average
Visual Assist 2024.3 | 0:42 | 0:45 | 0:43 | 0:43
Visual Assist 2024.2 | 1:41 | 1:40 | 1:34 | 1:38
Visual Studio 2022 | 2:34 | 2:22 | 2:27 | 2:27

Setup specs: AMD Ryzen 7 7800X3D processor, Team T-Force Delta 32GB (2 x 16GB) 288-pin RAM, Crucial T700 Gen5 NVMe M.2 SSD

As the results show, the latest update brings Visual Assist's symbol-finding performance well ahead of both its previous versions and Visual Studio's built-in search. Further testing on other platforms will be undertaken; please check back later for more results.

Exiting Beta: CUDA core development support & Move Class feature

Two VA features exit their beta phase and are now generally available. If you have not tried them yet, we highly recommend doing so, as they provide a lot of usefulness that might not be readily apparent.

  • CUDA support
    First added in 2023.4, CUDA support allows Visual Assist to recognize CUDA files and parse and highlight them like regular C/C++ files. This feature now reaches fully supported status, and you can reliably use IntelliSense-like features on CUDA files.
  • Move Class feature
    Refactoring and moving entire classes can sometimes be a hassle. This feature moves from beta to supported status and allows you to easily choose an entire class and port it over to file/s of your choosing.

Create File: specify a directory + auto implementation.

This is a tiny but useful quality-of-life change for creating files. Previously, if a target file was not found, Visual Assist would display a failure error and ask whether you wanted to run Create File or stop. Now it runs Create File automatically, and you can hit Cancel instead.

Furthermore, a bug fix for Create File: Visual Assist will now consistently move the implementation afterwards. (In the past, it sometimes failed to do so.)

These two changes will hopefully make your experience more seamless and intuitive.

Discord link and feedback options in the Help menu

Introducing our newly opened Discord server for all Visual Assist users. We're hoping this hub will function like our forums, where users can request changes, report bugs, and share useful information and tips about the plugin.

As it’s a WIP, anyone who is interested in helping us manage and build the community is welcome to do so. Send us a message here if you’re interested.

Furthermore, we’ve added new feedback channels in one of our menus. Navigate to Help and browse new feedback options and let us know what you think!

Bug fixes and improvements

Apart from the above major fixes, we have a couple of minor bug fixes and QoL changes. The complete list is below: 

  • Fixed issue where Move Implementation would not move the implementation if a new file needed to be created.
  • Improved editor performance when editing C#.
  • Fixed Add Include issue where C headers would sometimes be added instead of their C++ counterparts.
  • Fixed issue where Move Class to New File would sometimes not be offered near macros.

Send us a message or start a thread on the user forums for bug reports or suggestions.

Visit our download page to update to the latest release manually. Happy coding!

The post Visual Assist 2024.3 release post first appeared on Tomato Soup.

Visual Assist 2023.4 now released
https://www.wholetomato.com/blog/visual-assist-2023-4-released/
Thu, 17 Aug 2023

VA 2023.4 is now published and available to download!

This release marks a major milestone in Visual Assist's history as it begins official support for Unity engine development. Also in this release: the start of support for CUDA development in C/C++ and numerous parser improvements. Read on for the complete details of the changes and improvements in this release.

Start of official support for Unity

It's been a long time coming, but Whole Tomato is glad to announce that the 2023.4 build features the first of many Unity-specific features. Nope, not the hivemind; we are of course talking about the very versatile game engine and development platform.

For those unaware, the Unity engine is the backbone of both 2D and 3D games ranging from wildly popular and suspicious games, all the way to full blown highly-acclaimed triple A titles.

Visual Assist has long been popular for helping game developers deal with complex C++ code. Starting from this release, Visual Assist expands its focus to C# game development. Users can expect VA staples such as refined navigation, intelligent autocomplete, and code refactoring to work just as well for C# work.

Furthermore, users can also submit feature requests specific for Unity development. We are starting with shaders—more on this below—but if you have any suggestions as to what features are missing in your Unity development, do let us know by emailing support.

Shaders for Unity

The start of official support for Unity development is headlined by shader file support. Similar to our previous addition of supporting HLSL, we are kicking off Unity updates by adding its shader files to our list of supported languages.

CUDA C/C++ Development

If you are a data scientist, software engineer, or plain hobbyist looking to harness the power of your GPU for general-purpose programming tasks, then you most likely know about Compute Unified Device Architecture (CUDA). This programming model developed by Nvidia allows programmers to utilize the multi-core performance of graphics cards for non-graphics applications (although it's perfectly fine to use for 2D/3D too!).

If you are interested in CUDA, then rejoice! VA 2023.4 also marks the start of official support for CUDA development. Visual Assist can now parse and analyze CUDA-related syntax, libraries, and APIs, so you get IntelliSense-like features, navigation, and highlighting for CUDA (.cu) files.

A CUDA file with proper syntax highlighting and code analysis features.

Parser Improvements: template functions with auto / trailing return type and std::tuple autocompletes 

VA 2023.4 now properly highlights and parses trailing return types, a feature that bypasses a C++ limitation where the return type of a function template cannot be generalized if it depends on the types of the function arguments. This release specifically deals with some of the edge cases reported by our users.

A trailing return type is written by declaring a generic return type with the auto keyword before the function identifier and specifying the exact return type after the function identifier. Learn more about it here.

The parser is aware of sum, and proper syntax highlighting and navigation features are applied.

Also fixed in this release are autocompletes for std::tuple initializations. This improves how the VA parser handles certain templated types; users will see better completion suggestions when typing in their codebase, such as when typing std::tuple.

Better Add Include logic

Visual Assist can add include directives for headers that resolve unknown symbols in the current C++ source file. The underlying logic for Add Include has been improved for better context awareness, resulting in better predictions of where to place the new include.

Add Include now inserts new lines in the most logical place.

Add Include can be accessed by hovering over an unknown symbol and opening the quick actions and refactoring menu (Shift + Alt + Q).

Some other spring cleaning-type improvements

We've also made changes to a few minor things in the UI and the app's options that you should know about. Firstly, our shader support has been available for a few rounds of releases already, and we're excited to announce that it has finished its beta phase and is now enabled by default.

Secondly, we've streamlined the Game Development tab of our options dialog. This is to make room for upcoming additions (stay tuned!).

Thirdly, we've tweaked some tomatoes and icons along the way to better respond to your actions and better display what options are available to you. Relevant options and menus are emphasized when they are needed; secondary options subtly fade into the background otherwise. This is in line with our commitment to distraction-free coding.

Lastly, if you’ve missed or haven’t installed the latest version yet, you may have noticed that the Visual Studio marketplace listings for the 32 and 64-bit versions of Visual Assist have now been combined. Versions 2010 – 2022 will now be accessible from one listing.

Bug Fixes

  • Fix for ‘VaMenuPackage’ package error affecting VS2022 17.7.0 3.0 load
  • Fixed issue where some types with leading macros before template definitions were not parsed correctly.
  • Fixed issue where autocomplete of some types, such as std::tuple, would produce partial results.  
  • Fixed rendering of suggestion list tomato icons in Visual Studio 2022. 
  • Fixed issue where the VA Navigation Bar could become smaller than intended.
  • Fixed Code Inspections error that could happen in some cases in Visual Studio 2022 17.6+. 

Thanks to those who submitted their feedback and bug reports. Keep ‘em coming. Send us a message or start a thread on the user forums for bug reports or suggestions.

Contrary to the preview blog statement, VA 2023.4 is a bit different: it is being released to everyone simultaneously, with no rolling release mechanism, as it includes some crucial updates we want to share with everyone as fast as possible. You can also check our download page to manually update to the latest release. Happy coding!

 

The post Visual Assist 2023.4 now released first appeared on Tomato Soup.
