Do I Need To Know C++ For Unreal Engine? The Updated 2025 Guide

Quick Answer: While C++ isn’t strictly required for Unreal Engine development thanks to Blueprint visual scripting, learning it unlocks advanced capabilities and significantly expands your development options. For beginners, you can start with Blueprints and gradually learn C++ for Unreal Engine as your projects grow more complex.

The Nintendo Switch uses C++.

C++ is used to program and create video games on different platforms.

What This Guide Covers

Whether you’re a complete beginner or transitioning from another engine, this comprehensive guide answers the most common questions about C++ and Unreal Engine development. You’ll learn when C++ is necessary, what alternatives exist, and how to make the best choice for your project goals.

The Short Answer: Blueprints vs C++

You can absolutely create games in Unreal Engine without knowing C++. Unreal’s Blueprint visual scripting system allows you to build complete games using a node-based visual interface instead of traditional code. Many successful indie games have been built entirely with Blueprints.

However, C++ becomes valuable when you need:

  • Maximum performance optimization
  • Complex gameplay mechanics
  • Custom engine modifications
  • Integration with third-party libraries
  • Advanced AI systems

However, to get the most out of UE and improve your fundamentals, you should not use Blueprints or C++ exclusively. Ideally, you should learn how to use both. If you want to learn more about C++ vs Blueprints, we’ve discussed when to use Blueprints or C++ when developing games in another article.

Is Unreal Engine good for beginners?

Unreal Engine is a great game engine for beginners as it provides access to a lot of templates and assets completely for free (royalties only kick in once your game earns over $1M in gross revenue). However, it is also expansive and powerful enough for experienced developers. If you are familiar with other platforms, such as Unity or previous Unreal Engine versions, you will be able to jump right in and start creating video games using Unreal Engine C++. A game and graphics studio that specializes in Unreal Engine C++ development can also be a great resource for learning the language and developing your skills.

The process of developing a game with Unreal Engine is not difficult to understand, but it does require a lot of time and practice, knowledge of the language, and commitment. And one of the very first questions is: where do I begin?

Do you need to know how to code for Unreal Engine?

Creating entire games with Unreal Engine can be a daunting task, but with the right knowledge and skills, you can make amazing games. Some basic knowledge of coding—and C++ to an extent—is required, but it is not necessary to be an expert. Unreal Engine is not just intended for developers but also for creators, and a game programmer is not limited to working with Unreal Engine.

It is even possible to create full-fledged games without any coding background. Popular gaming engines like Unity or Unreal Engine offer visual scripting tools or no-code solutions for managing game assets. Unreal has its Blueprint scripting process wherein you can use nodes to replace normal programming logic.

But if you want to dive into the nitty-gritty, learning the fundamental language the engine is built on is a surefire way to greatly increase both your options and your efficiency. Additionally, many other game development platforms, such as Unity and GameMaker, rely on similar programming concepts. Knowing how to code for these platforms will help you get started in the game development industry.

Learning Path Recommendations for Complete Beginners

  1. Start with Blueprint Fundamentals
  2. Learn Basic C++ Outside Unreal
    • Master fundamental programming concepts
    • Practice with simple console applications
    • Understand object-oriented programming principles
  3. Transition to Unreal C++
    • Start with simple C++ components
    • Gradually replace Blueprint functionality with code
    • Learn Unreal-specific C++ conventions and macros (see the sketch after this list)
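
To give a feel for those conventions, here is a minimal sketch of an Unreal-style actor class. It is an illustration only: the class name, the MYPROJECT_API export macro, and the Speed property are hypothetical, and the code only compiles inside a real Unreal project that generates MyActor.generated.h.

// MyActor.h: illustrative only; assumes an Unreal project module named "MyProject"
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "MyActor.generated.h"

UCLASS()
class MYPROJECT_API AMyActor : public AActor
{
    GENERATED_BODY()

public:
    AMyActor() { PrimaryActorTick.bCanEverTick = true; }

    // UPROPERTY exposes the value to the editor and to Blueprints,
    // so designers can tweak it without touching C++.
    UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = "Movement")
    float Speed = 200.0f;

    virtual void Tick(float DeltaTime) override
    {
        Super::Tick(DeltaTime);
        // Move forward along the local X axis every frame.
        AddActorLocalOffset(FVector(Speed * DeltaTime, 0.0f, 0.0f));
    }
};

Notice how the UCLASS, GENERATED_BODY, and UPROPERTY macros sit alongside ordinary C++; getting comfortable with that mix is most of the Unreal-specific learning curve.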

READ MORE: Install and set up Unreal Engine with Visual Studio.

When is C++ essential then?

C++ coding becomes essential when you’re dealing with specific use cases where the Blueprint system is no longer sufficient:

• Performance-Critical Applications

C++ provides direct memory management and system-level control that Blueprint scripting cannot match. For AAA games, VR experiences, or applications requiring 60+ FPS with complex systems, C++ often becomes necessary.

• Advanced Game Systems

While Blueprints excel at prototyping and standard gameplay, certain advanced features require C++ implementation:

  • Custom rendering pipelines
  • Specialized physics calculations
  • Multi-threaded operations
  • Platform-specific optimizations

• Professional Development

Most professional game studios expect C++ knowledge for Unreal Engine positions. Understanding both Blueprint and C++ makes you more versatile and employable in the game development industry.

• Custom Gameplay Mechanics

With C++, you can implement complex gameplay logic that goes beyond what is possible with Blueprints. This includes creating custom character controllers, AI behaviors, and game rules.

• Creating components and 3D environments

Components are the basic building blocks of Unreal Engine. They can be used to create 3D environments, menus, and other user interface elements, and they can be exported to other platforms.

• Advanced AI Systems

Create sophisticated AI systems using C++ for behavior and decision-making processes for non-player characters (NPCs) and other game elements such as custom pathfinding algorithms, decision-making systems, and behavior trees.

• Create logic and integrate with scripts

Logic is the code that controls how players interact with each component. Scripts are a special, more visual type of code. Using both C++ and scripting for Unreal allows for seamless game development.

• Test and debug games

Testing and debugging games is an important part of the game development process. When you work with mechanics created using C++, verifying those components will most likely require C++ knowledge as well. Problems that can be debugged include crashes, missing textures, and incorrect game logic.

Blueprint vs C++ Performance Reality

When Performance Differences Matter

The performance gap between Blueprint and C++ varies significantly by use case:

  • UI and Menu Systems: Minimal difference
  • Simple Gameplay Logic: Negligible impact for most games
  • Heavy Calculations: C++ shows clear advantages
  • Frame-Critical Systems: C++ often necessary for consistent performance

Hybrid Approach Benefits

Most successful Unreal projects use both systems strategically. Learn more here.

  • Blueprints for: UI, game flow, designer-friendly tweaking
  • C++ for: Core systems, performance-critical code, complex algorithms

Development Environment Setup

If you’ve decided to learn C++ for Unreal Engine, it’s best to bring the best equipment along for the journey!

Recommended Tools

Primary IDE: Visual Studio is our top choice due to the following:

  • Access to Visual Assist for enhanced C++ IntelliSense and navigation
  • Accessible for learning due to free community edition
  • Unreal Engine integration extensions
  • Version control integration (Perforce or Git)

Optimization for Productivity

Modern development requires efficient tooling. Visual Studio’s default C++ support, while functional, can feel limited when working with Unreal’s complex codebase. Supplementary tools like Visual Assist significantly improve:

  • Code navigation and search capabilities
  • Enhanced syntax highlighting for Unreal macros
  • Improved auto-completion and error detection
  • Better refactoring tools for large codebases

Common Beginner Mistakes to Avoid

• Overcommitting to One Approach

New developers often choose either Blueprint-only or C++-only approaches. The most effective strategy combines both systems based on specific needs.

• Ignoring Optimization Early

While premature optimization is problematic, understanding performance implications from the start prevents costly rewrites later.

• Neglecting Documentation

Unreal Engine’s documentation is extensive. Regularly consulting official docs, community forums, and example projects accelerates learning significantly.

READ: Industry Perspective: What Game Studios Expect From You

Making Your Decision

Choose Blueprint-First If You:

  • Are new to programming or game development
  • Want to see results quickly and stay motivated
  • Focus on design and creative aspects over technical implementation
  • Plan to work primarily on smaller or indie projects

Prioritize C++ Learning If You:

  • Have existing programming experience
  • Aim for positions at larger game studios
  • Want maximum control over performance and implementation
  • Plan to work on technically demanding projects

Conclusion: Your Path Forward

The question isn’t whether you need C++ for Unreal Engine—it’s about understanding when each tool serves your goals best. Blueprint provides an excellent entry point that can take you surprisingly far, while C++ offers the power and flexibility for advanced development.

Start with Blueprint to build confidence and understanding of game development concepts. As your projects grow in complexity and your skills develop, gradually incorporate C++ where it provides clear benefits. This progressive approach ensures you’re always working with tools appropriate to your current skill level while building toward more advanced capabilities.

Remember that both Blueprint and C++ are valuable skills in the modern game development landscape. The most successful Unreal Engine developers understand both systems and use them strategically to create engaging, performant games.

Next Steps:

  • Download Unreal Engine and complete the official Blueprint tutorials
  • Join the Unreal Engine community forums and Discord
  • Start with simple projects and gradually increase complexity
  • Consider supplementing your IDE with productivity-enhancing tools like Visual Assist

The journey from Blueprint beginner to C++ expert takes time, but each step opens new creative and professional possibilities. Your games—and your career—will benefit from this comprehensive skill set.

Highly Recommended for Unreal C++ 

If you do decide to code using C++ for Unreal Engine, you will most likely download Visual Studio, the official IDE of choice for developing C++ games in Unreal Engine. It provides extensive navigation, refactoring, auto-suggestion, and syntax highlighting features for C++ development.

However, Visual Studio also caters to C# and other languages, and unfortunately its out-of-the-box support and tooling for C++ can feel relatively weaker at first glance. Furthermore, Unreal Engine has bespoke coding elements and syntax. This can lead to frustration when developing Unreal C++ games in the IDE, because some basic navigation features, such as syntax highlighting, may be unresponsive or unavailable entirely.

For these cases, it is highly recommended to install a supplementary plugin like Visual Assist which improves the overall IDE experience and replaces the frustrating elements with tailored features made for C++ Unreal Engine development. It makes the IDE features responsive and adds “understanding” so that basic features such as code highlighting, search, and auto-suggestions work properly.

Visual Assist 2025.3 release post

Visual Assist 2025.3 is now public and available to download. 

This release improves the developer experience by updating the feedback UI for some of the features added in recent releases. We’ve also updated our options dialog’s look and feel alongside some of the line highlighting options, and we’ve fixed many bugs and issues based on user reports.

The highlight of this release is a new option when using VA’s extract method so you can now fine-tune the parameter list—which includes selecting variables, excluding unnecessary ones, or arranging their order. 

On the visual feedback side, we’ve enhanced the popup interface for Replace Auto With Exact Type. Additionally, macro expansions are now revealed on hover. Learn more about these changes in the sections below.

Download the release now by visiting our website download page.

Enhanced Extract Method with parameter customization

Visual Assist’s Extract Method feature now offers full parameter customization through an intuitive dialog interface. When extracting code into a new method, developers can now:

  • Add, remove, or reorder parameters before the method is created
  • Modify function signatures using natural coding language syntax
  • Make extracted methods more general by adding custom parameters

This enhancement skips most of the post-extraction editing; instead, a smarter interface guides you to adjust the extracted method as Visual Assist creates the implementation.

This is unlike most rigid UI implementations found in other tools. Visual Assist uses its intelligent parsing to understand your code modifications, providing a more natural and flexible experience.

New editing options for extract method. Edit name, move, or reorder parameters.

How it works: Select code you want to extract, choose Extract Method under the quick actions menu, and customize the function declaration in the dialog using standard C++ syntax. Use VA’s updated UI to create the optimized method accordingly.
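
For a concrete picture of the workflow, here is the kind of result an extraction might produce. The OrderItem type, the SumOrderTotal name, and the parameter list are purely illustrative: they stand in for whatever you type into the dialog, not for anything Visual Assist generates on its own.

#include <vector>

struct OrderItem { double price = 0.0; int quantity = 0; };  // illustrative type

// The loop below was originally selected inside a larger function; after
// choosing Extract Method and editing the signature in the dialog, it ends
// up as a standalone function with the name and parameters you specified.
double SumOrderTotal(const std::vector<OrderItem>& items)
{
    double total = 0.0;
    for (const auto& item : items)
        total += item.price * item.quantity;
    return total;
}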

Macro Expansions on Hover (Quick Info)

This was added based on a request from a user who was developing in Unreal Engine (UE) in Visual Studio. Many UE users turn off the built-in IntelliSense and rely solely on VA’s features to maximize performance on large codebases—which Unreal projects usually are. Unfortunately, this also means that the macro expansion info provided by IntelliSense is removed.

With this new change, however, VA can now display macro expansions instantly when you hover over macro definitions, providing immediate insight into complex preprocessor directives without interrupting your workflow.

Hover over macro definitions to show its expansion instantly.
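
As a simple illustration of why that matters, consider a hypothetical function-like macro (not taken from Unreal). Seeing its expansion on hover makes the actual evaluated expression obvious:

// Hypothetical macro for illustration only.
#define SQUARE(x) ((x) * (x))

int side = 3;
int area = SQUARE(side + 1);   // hovering reveals ((side + 1) * (side + 1)), which evaluates to 16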

Improved dot to arrow conversion now supports auto pointers

VA’s dot-to-arrow conversion automatically changes . to -> when accessing members through pointers, eliminating the need to manually switch between dot and arrow operators.

With this update, the dot to arrow conversion feature now handles auto pointer declarations better. The plugin recognizes explicit pointer hints in auto variable declarations, providing more accurate code completion and conversion.
Example:


int myInt = 1;
int* myIntPtr = &myInt;

auto myAutoPtr = &myInt;      // Implicit pointer
auto* myExplicitAutoPtr = &myInt;  // Explicit pointer - now detected!

In the example above, the auto types of both “myAutoPtr” and “myExplicitAutoPtr” resolve to “int *”, but the second declaration makes the fact that it is a pointer explicit.

This enhancement makes the feature more reliable when working with modern C++ auto declarations, reducing coding errors and improving developer productivity.

Modernized Options Dialog Interface

The Visual Assist Options dialog has been completely rebuilt with a modern UI framework, moving away from the legacy Win32 interface theme. This modernization represents the first step in a comprehensive UI refresh that will extend to other Visual Assist components in future releases.

Visual Assist 2025.3 updates the look and feel of the options dialog.

Improved Ray Line Highlighting Style

One of the ways VA showcases the current active line is the “ray lines” highlighting style. Ray lines provide a subtle, non-intrusive way to highlight the current line using minimal horizontal lines without left/right borders.

New improved ray line highlighting style.

This option has been refined with better vertical spacing, addressing user feedback about the previous tight layout.

If you prefer using a different highlighting style, you can choose from the available options in the options dialog (Thin Frame, Background Color, and Ray Lines). To choose your preferred highlighting style, navigate to Extensions — VAssistX — Visual Assist Options — Editor — Highlighting — “Highlight current line with:”.

Enhanced Replace Auto With Exact Type Accessibility

Building on the popular Replace Auto With Exact Type feature from previous releases, Visual Assist now makes this functionality more accessible via the right-click menu or automatically when typing the auto keyword.

Use Quick Info menu or right click on Auto.
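
As a rough sketch of what the feature produces (the helper function and types below are hypothetical), replacing auto substitutes the deduced type directly into the declaration:

#include <map>
#include <string>

std::map<std::string, int> LoadWordCounts();   // hypothetical helper

void Example()
{
    // Before: the deduced type is hidden behind auto.
    auto counts = LoadWordCounts();

    // After applying Replace Auto With Exact Type (illustrative result):
    std::map<std::string, int> exactCounts = LoadWordCounts();
}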

Bug Fixes

For bug fixes and general improvements, the most critical update is the restoration of shader syntax coloring support in Visual Studio 17.12.0 and newer versions, addressing multiple related issues with code formatting and syntax highlighting in shader files across VS 2019 and 2022.

Additionally, there are significant performance improvements for Unreal Engine projects, specifically enhanced responsiveness of quick actions and refactoring menus. The release also includes fixes for HLSL file formatting and improved navigation performance for MAUI base classes.

The following list summarizes the most important bugs addressed in this release:

  • Fix for code formatting not working in shader files in VS 2019+
  • Fix for syntax coloring not working in shader files in VS 2022
  • Restored shader syntax coloring support in Visual Studio 17.12.0 and newer
  • Improved responsiveness of quick actions and refactoring menu in Unreal Engine projects
  • Fixed inconsistent filter control display in initial Find References results
  • Improved performance when navigating from MAUI base classes using Go To Related
  • Resolved formatting issues in HLSL files when shader support is enabled in Visual Studio 2019 and 2022

Availability & Feedback

This release was made generally available on June 30th and can be downloaded via the downloads page. As always, we appreciate feedback, especially on recently added features and the UI changes we introduced.

Update now with an active license to utilize all the features and fixes in this release. And if you have any questions or encounter any issues, feel free to reach out to support@wholetomato.com.

C++ Modules: What it promises and reasons to remain skeptical

Introduction

C++ has never been afraid of complexity—but even for a language known for performance and control, the #include system has long seemed like a relic from another era.
Modules in C++ were a long-awaited upgrade aimed at cleaning up the mess of includes, speeding up build times, and making large-scale C++ development a bit less painful.

Standardized in C++20 and expanded in C++23, modules promise big gains in compile times. But as of 2025, they’re still not widely adopted in most teams’ toolchains. Some developers are diving in and seeing real benefits. Others are holding back, citing spotty compiler support, tricky build integration, and a reluctance to face the learning curve that comes with any paradigm shift.

This post isn’t about selling you on the latest trend or convention—it’s a practical look at what C++ modules actually offer today, where the limitations still lie, and in which cases it makes sense to adopt them. You can decide for yourself afterward.

A Quick Primer on C++ Modules

If you’ve worked with C++ for more than five minutes, you’ve dealt with header files. They’re powerful, but they also add noise: macros, guard clauses, and redundant includes that slow down compilation and make dependency tracking a chore. Modules were introduced to alleviate some of these issues.

At a high level, C++ modules replace the traditional preprocessor-based #include model with a cleaner, more structured system. Instead of copy-pasting code into translation units, modules compile once, then import—reducing repeated parsing and giving compilers more context to optimize builds.

How C++ Modules Work

A module interface is a standalone file—usually with the .ixx extension in MSVC—that declares what’s available to other parts of your program. You can then import this module in other files using the import keyword (much like Python’s import), bypassing the need for header files entirely.
Behind the scenes, the compiler builds and caches the module interface, so future builds can skip reprocessing its contents—saving time and keeping things tidy.
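
A minimal sketch of that workflow, using a hypothetical math module, might look like this (the .ixx extension follows the MSVC convention; other toolchains accept different extensions and flags):

// math.ixx: module interface unit
export module math;

// Only exported declarations are visible to importers.
export int add(int a, int b)
{
    return a + b;
}

// main.cpp: a consumer translation unit
#include <iostream>
import math;

int main()
{
    std::cout << add(2, 3) << '\n';   // prints 5
}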

Timeline at a Glance

  • C++20, officially published in December 2020, introduced official module support, though early compiler implementations were partial.
  • C++23, finalized in early 2023, expanded the spec, offering better support for features like module partitions and header unit compatibility.
  • Toolchains like Clang, MSVC, and GCC continue to evolve their support—but as of 2025, full interoperability is still a work in progress.

C++ module adoption timeline

Arguments for Adopting C++ Modules

If you’ve ever watched a massive C++ project crawl through compilation—or spent hours untangling a web of includes and macros—then the case for modules probably sounds pretty appealing. Here’s where they shine.

Improved Build Times and Scalability

Traditional C++ compiles every translation unit independently, parsing the same headers repeatedly across your codebase. That’s a lot of duplicated effort.
With modules, compilers can parse once and cache the results (just like how Visual Assist does it!). Module interfaces are precompiled and reused, cutting down redundant parsing.
On large projects, this can lead to significant reductions in full build and incremental compile times, especially when combined with modern build systems that understand modules.
This isn’t just theoretical—early adopters have seen real gains when porting to modules, particularly in libraries with thousands of files and deep dependency chains.

Cleaner Dependencies

Modules bring much-needed structure to C++. They reduce reliance on preprocessor directives and eliminate include guards, forward declarations, and subtle header-only bugs. In fact, they encourage you to think more clearly about what should be exposed and what should stay private.
Since you explicitly export only what’s needed, modules help enforce encapsulation, making APIs easier to maintain and less prone to unexpected breakage.
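
For example, a small hypothetical logging module can keep its helper completely invisible to consumers while exporting a single entry point:

module;                     // global module fragment: traditional #includes go here
#include <iostream>
#include <string>

export module logging;      // illustrative module name

// Not exported: importers cannot see or depend on this helper,
// so it can change freely without breaking callers.
std::string Decorate(const std::string& message)
{
    return "[log] " + message;
}

// Exported: the only symbol visible to importers.
export void Log(const std::string& message)
{
    std::cout << Decorate(message) << '\n';
}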

Improved IDE and Tooling Support

While not all editors are fully up to speed yet, modern IDEs and compilers are catching up. Visual Studio, Clang-based tools, and even some lightweight editors are beginning to provide meaningful module-aware features—like faster IntelliSense, smarter indexing, and fewer false-positive diagnostics.
Once your toolchain supports modules well, you’ll notice a smoother developer experience, particularly when working in large codebases.

Modernization and Future-Proofing

Adopting modules isn’t just about shaving off build minutes—it’s about aligning with the future direction of the language. As more of the standard library becomes importable (such as import std; in C++23), developers who adopt early will be better positioned to take advantage of new capabilities.
Modules are also a gateway to cleaner build systems, more granular dependency management, and even more secure code, thanks to their ability to restrict symbol visibility and reduce accidental API exposure.

Industry Trends and Early Adoption

While modules haven’t reached critical mass yet, they are gaining traction. Library developers and performance-focused teams are leading the way, especially those building SDKs, game engines, or systems software where build time is a bottleneck.
We’ve also seen big names like Microsoft experiment with module adoption in parts of their standard library implementation, and some open-source projects have already migrated small parts of their code to test the waters.

Why you may want to delay adopting C++ Modules (for now)

For all the promise that C++ modules bring, real-world adoption is still, well… cautious. Developers aren’t exactly lining up to refactor their entire codebase just yet — and there are good reasons why.

Not much incentive to adopt

Even in greenfield projects, introducing modules comes with a learning curve. But in legacy codebases? Migration can be daunting. You’ll need to rethink your header structure, untangle tight coupling, and manage new build system dependencies — not to mention retraining your team. And then there’s the question of compatibility: modules don’t play nicely with everything, particularly if you rely heavily on macros, conditional compilation, or platform-specific headers.


In other words, this isn’t a weekend refactor—and for many teams, the payoff doesn’t yet outweigh the cost; it often makes more sense to reserve modules for new projects instead.

Tooling Inconsistencies and Fragmentation

Ask any developer who’s attempted to go modular: “Which compiler are you using?” matters more than it should. While support for modules exists in Clang, MSVC, and GCC, it’s not uniform — and version-specific quirks can introduce frustrating inconsistencies.


Build system support is also in flux. While CMake has added module support, it still feels experimental, especially for complex project setups or cross-platform builds. Other systems like Bazel or custom build pipelines require more glue code than most teams want to maintain.
In short: the tooling isn’t fully there yet — especially if you’re not using the absolute latest compiler versions.

Lack of Ecosystem Maturity

Even if your toolchain is up to date, the broader ecosystem might not be. Most third-party libraries aren’t shipping with module interface units, which means you’re either stuck writing your own wrappers or falling back to #include anyway. That limits the benefits of going modular in mixed environments — which, let’s face it, is most environments. Until popular libraries (Boost, Qt, etc.) begin offering reliable module support, most teams can’t go all-in without making sacrifices.

Limited Real-World Case Studies

There’s still a lack of detailed success stories when it comes to large-scale adoption. Some early adopters have shared benchmarks or migration notes, but most real-world examples are small experiments, not full production shifts.


Without broader case studies to learn from, many teams are taking a “wait and see” approach — watching how others fare before diving in themselves.

Stability Concerns

The C++ modules ecosystem is still evolving. Compiler behavior can change between minor versions, module-related bugs pop up in tooling updates, and build system support continues to shift.


This kind of churn makes it hard to commit to modules in production, especially in enterprise environments where stability is everything.

Situations Where Modules Might (or Might Not) Be Worth It

C++ modules aren’t an all-or-nothing deal — and thankfully, you don’t have to rip out every #include to start using them. Depending on your project, team size, and tooling setup, modules might either be a smart optimization or an unnecessary complexity. Let’s break it down.

When Modules Make Sense

  • You’re starting a new codebase (especially at scale)
    Greenfield projects are the perfect playground for modern C++. If you’re building a large system from scratch, modules let you start clean — without legacy header baggage. Organizing your code as modular interfaces from the beginning can make maintenance, scalability, and onboarding much easier.
  • You maintain a modern toolchain
    If your team is already using the latest versions of GCC, Clang, or MSVC — and you’re comfortable updating your toolchain regularly — you’re in a better position to benefit from the improved compile times and structure that modules offer.
  • You’re building reusable libraries
    Modules are a natural fit for API design. If you’re developing shared components, SDKs, or internal packages, defining module interfaces can help enforce encapsulation and create cleaner, more predictable dependencies.
  • You have a strong DevOps/infrastructure team
    Getting modules to play nicely with CMake or your CI pipeline isn’t always straightforward. Teams with dedicated infrastructure support can manage the learning curve more effectively and are better equipped to deal with compiler quirks or build system tweaks.

When You Might Want to Hold Off

  • You’re working with a legacy codebase
    Old code doesn’t like change. Migrating headers, untangling circular dependencies, and retrofitting module maps can eat up time with little visible payoff — especially if you’re also juggling deadlines.
  • Your build system isn’t ready
    If your project relies on complex or deeply customized builds, introducing modules can introduce instability rather than speed. Even popular tools like CMake are still maturing their module support, and not all workflows are smooth yet.
  • You rely heavily on third-party libraries
    Until widely used libraries start shipping module interface units, your modules will live in an awkward coexistence with #include. This kind of hybrid environment can be frustrating and lead to confusing bugs or duplicated efforts.
  • Your team is small or early-stage
    If you’re moving fast and shipping often, taking time to restructure code for modules might not be worth the effort right now. Simplicity usually wins in the early days — and headers still work just fine.

Community Perspectives and Industry Signals

While C++ modules continue to mature, much of their momentum—and hesitation—comes from the wider community: compiler vendors, standards committees, open-source maintainers, and developers who’ve dipped their toes in and reported back. Let’s explore what the broader C++ ecosystem is saying about modules in 2025.

Summary: Key Considerations Before Making a Choice

As we wrap up, let’s briefly recap the main points and outline what you should consider before diving into C++ modules:

Pros of Adopting C++ Modules

  • Improved build times: If you’re working with large codebases, the performance gains from reduced redundant parsing can be significant.
  • Cleaner dependencies: Modules eliminate many of the headaches associated with header file inclusion, such as tangled macros and circular dependencies.
  • Tooling support: While still evolving, most major compilers (MSVC, Clang, GCC) are heading in the right direction, and IDE support is growing.

Cons of Adopting C++ Modules

  • Fragmented tooling: Support across compilers and build systems is still inconsistent. If you’re using a particular toolchain, check for full compatibility before diving in.
  • Migration cost: Moving an existing project to modules involves significant changes in build systems, dependencies, and possibly code itself.
  • Lack of third-party support: If your project relies heavily on external libraries, check whether they support modules, or be prepared for some custom workarounds.
  • Limited case studies: The adoption rate of modules, especially in large-scale real-world projects, is still low, meaning the learning curve could be steeper than expected.

When Should You Adopt C++ Modules?

  • New codebases or projects: If you’re starting fresh or adding new features to a project, adopting modules early could save you time in the long run.
  • Open-source libraries: If you’re maintaining a widely-used library, moving to modules could lead to performance improvements that benefit the community.
  • Legacy codebases: If you’re dealing with a large, established project, the effort to migrate to modules may not be justified unless you have the resources to support it.

Ultimately, adopting C++ modules in 2025 depends on your project’s size, complexity, and long-term goals. It may be worth experimenting with modules on smaller, isolated parts of your project to gauge their potential before committing to a full-scale migration.

Add more support for modules in C++

If you’re on the fence about using C++ modules because of the relatively limited tooling available for them, consider adding the Visual Assist plugin for Visual Studio. In a recent update, it added recognition for module declarations in your project. This added support makes C++ modules easier to work with, with navigation and auto-suggest features working as you’d expect.

Visual Assist 2025.1 release post

VA 2025.1 enhances usability with smarter navigation, better C++ module support, and more flexible refactoring options. The updated first-run dialog, configurable test snippets, and a refreshed UI improve the overall experience. Additionally, several key fixes address navigation issues, assignment suggestions, and UI inconsistencies, ensuring a more stable and efficient development environment.

Download the release now from our website download page.

VA Integration modes: Updated First Run Dialog

In VA 2024.9, new integration modes were added to allow users to personalize their experience with how Visual Assist features were presented and accessed. You can visit the integration mode page to learn more about available integration modes. This dialog was initially shown for fresh installs only. 

VA 2025.1 makes the dialog appear for every user who has not previously encountered it, regardless of whether they are installing Visual Assist for the first time or have updated from an earlier version.

The first run dialog allows users to pick VA integration modes.

Option to exclude symbols in GoTo and List Methods navigation

This small tweak adds an option to skip selecting symbols after you navigate to them. That way, you can immediately start typing before the symbol, or keep your current selection even after jumping to different parts of the code.

This currently works for VA’s Go To and List Methods in Current File (Alt + M). Access the new option via the toolbar.

Open the options dialog to select symbol selection behavior.

Specify access level on Extract Method

VA introduces a new option that allows developers to specify the access level (public, private, or protected) directly when using the Extract Method refactoring tool.

Specify the visibility of methods obtained via Extract Methods using the new options.

This streamlines the refactoring process by providing an immediate choice of access level for the new method being created from the selected block of code. Previously, after extracting a method, the default access level was applied (usually private), and any changes to this required manual adjustment. 

With this update, developers can set the desired access level in the initial step of the extraction, ensuring better code organization and encapsulation from the outset.

New features added for C++ modules when importing

When declaring new modules into your project, VA will recognize what you are trying to do and core navigation and features will work accordingly. This includes autocompletion prompts, adding includes, finding references, and other pertinent navigations.

C++ modules were added in C++20 to help improve compilation times and the overall build performance of C++ programs. Modules provide a modern alternative to traditional header files and includes by allowing programmers to define interfaces that are compiled separately and imported as needed.

This reduces the need to include headers and recompile code unnecessarily, which can significantly speed up the build process. 

Modules in C++ are fairly new and the committee is still pushing for mass adoption. But whether you’re an early adopter of C++ modules or not, this VA update should help you find available modules should the need arise.

VA now parses C++ modules, enabling core navigations and features.

Support for *.IXX module files.

This change allows VA to parse and understand the new modular structure introduced with C++20. This means that developers can now work with module interface files (.ixx) directly within the Visual Assist environment, benefiting from features like syntax highlighting, code navigation, and intelligent code completion that were previously limited to traditional header and source files.

For instance, if you have symbols declared in an .IXX file, VA now properly parses them, and navigation features such as Go To work as expected.

Configurable snippet base for unit test generation

There are new configuration options available for Unit Test Generation that allow developers to customize the boilerplate code that is automatically generated when creating unit tests. 

The unit test generation feature was first introduced in VA 2024.9 and creates boilerplate that follows the Google Test framework: a new test file, prepopulated with placeholders following the test structure, to make things more convenient for users.

VA 2025.1 upgrades this feature with the flexibility to specify preferences and settings that align with your project’s requirements or personal coding standards.

New modernized tomato icon 

Our lovable tomato icon has been given a fresher look for the new development year! This was primarily done to improve user experience and accessibility: the change increases contrast and makes VA’s features more distinguishable, so users can spot them more easily in the IDE.

Updated tomato icon. Will be rolled out for every platform!

We’ve also taken the opportunity to maintain a consistent look and feel across all instances of our tomato icon. This update ensures that they appear correctly and uniformly across all platforms.

Excluding C# files from parsing via “settings.json” file.

VA 2025.1 builds upon a similar functionality introduced in VA 2022.4 where an option to consider configuration instructions outlined in a .json file can be used to skip unnecessary parsing when building solutions. 

This new feature does something similar, but for C# instead. The feature allows developers to specify which C# files should be excluded from parsing by Visual Assist through a configuration in a .json file.

This is particularly useful for developers working cross-platform as this tells Visual Studio and Visual Assist to “open a file but do not parse anything else apart from a specific part.” 

So even if users have dozens of non-Visual Studio files in one directory, you can specify which files are part of the project you are trying to open. (Otherwise, VS and VA will try to parse the whole directory—very resource-intensive and time-consuming.)

Bug Fixes

For bug fixes and general improvements, most of them were based on user feedback and reports. The most notable of these updates are fixes for a crash happening when logging is enabled while debugging, and a hang involving the Go To features. There was also a pesky bug related to having two-monitor setups that is now fixed. 

The following list summarizes the most important bugs addressed in this release:

  • Fix for flashing in the Find References results window on start or when changing monitors.
  • Fix for Encapsulate field in C#.
  • Fix for VA Hashtags not being suggested.
  • Fix for assignment suggestions not appearing in some cases.
  • Fix for dialog hang that could sometimes happen when using Goto.
  • Increased the display limit for Move Method to Base Class to 12 base classes (from 6).
  • Fix for Move Method to Base Class sometimes not displaying the base class list to move to.
  • Fix for tip of the day links opening in Internet Explorer rather than the default browser.
  • Fix for a crash that could sometimes happen when troubleshoot logging is enabled.
  • Fix for attributes displaying in a difficult to read color when in dark mode.

Availability & Feedback

This release was made generally available on March 28th and can be downloaded via the downloads page. As always, we appreciate feedback, especially on recently introduced features and the UI changes we introduced. Thank you for helping us create a better experience for all our users.

Update now with an active license to utilize all the features and fixes in this release. And if you have any questions or encounter any issues, feel free to reach out to support@wholetomato.com.

Test-Driven Development and UI/UX Design: A Practical Guide [Webinar Recap]

Don’t you wish your code came with an undo button for every mistake? So do all developers who accidentally pushed a bug into production!

But we’ve got the next best thing: unit testing. This webinar shows you how to stop breaking your codebase (and your spirit) by writing tests that catch errors before they escape into the wild. Perfect for developers who know they should test but don’t know how—or why.

What You’ll Learn:

  • The differences between two schools of TDD and when to use them.
  • How to implement CI pipelines and automate your test execution.
  • Practical techniques for leveraging static analysis tools and code profiling.
  • Real-world case studies that highlight successful approaches to refactoring and performance optimization.

In this webinar, our experts shared their best practices for developing high-quality C++ code, offering valuable insights to apply in your projects.

This webinar features insights from experts in software design and development, covering practical applications and real-world scenarios to help you streamline your workflows.

This webinar has concluded. Scroll down to watch the replay and review the highlights.

Webinar Replay

Webinar Highlights

Introduction

0:19-1:35: About Nuno: product manager for Visual Assist, clean code enthusiast, contact info shared, alongside mission of Visual Assist and upcoming new version announcement.

Message and Story

1:40-5:12: Importance of programmers writing good quality software and Nuno’s experience with different software development approaches (design thinking, waterfall, agile).

Test-Driven Development Overview

5:12-8:10: Discovery of test-driven development (TDD) and its impact on software quality. Explanation of TDD and the Red-Green-Refactor cycle. Importance of small increments, immediate feedback, and other TDD benefits.

Practical Exercise Setup

8:17-10:09: Overview of the Mars Rover exercise, rules, and references.
10:09-11:00: Visual Studio 2022 setup for the Mars Rover project (source files and test project creation).

First Test Case

11:00-12:08: Writing the first test: Initial position at (0, 0), facing north.
12:08-13:11: Creating the Rover class and implementing execute() to return an empty string initially.
13:11-16:16: Making the test pass by returning the expected position and direction (a minimal sketch of this first cycle follows below).
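
For readers following along at home, that first red-green cycle might look roughly like this in Google Test. The Rover interface and the "x:y:direction" output format are assumptions based on the common Mars Rover kata, not the webinar’s exact code:

#include <gtest/gtest.h>
#include <string>

// Assumed minimal interface for the kata.
class Rover {
public:
    std::string execute(const std::string& /*commands*/)
    {
        return "0:0:N";   // the simplest body that turns the first test green
    }
};

TEST(RoverTest, StartsAtOriginFacingNorth)
{
    Rover rover;
    EXPECT_EQ("0:0:N", rover.execute(""));
}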

Second Test Case

16:16-18:15: Writing the second test: Rotating right from north to east.
18:15-20:09: Updating Rover to handle the “right rotation” command and making the test pass (see the follow-up sketch below).
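
Continuing the same hypothetical sketch, the second cycle adds a failing test for the "R" command and then the smallest production change that keeps both tests green (this version replaces the hard-coded Rover above):

#include <gtest/gtest.h>
#include <string>

class Rover {
public:
    std::string execute(const std::string& commands)
    {
        std::string direction = "N";
        for (char command : commands) {
            if (command == 'R' && direction == "N") {
                direction = "E";   // only the behavior the current tests demand
            }
        }
        return "0:0:" + direction;
    }
};

TEST(RoverTest, RotatesRightFromNorthToEast)
{
    Rover rover;
    EXPECT_EQ("0:0:E", rover.execute("R"));
}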

Refactoring and Patterns

20:09-20:59: Recognizing patterns in the test code and introducing Google Test fixtures for code reuse.
50:06-52:11: Introducing and implementing a current position variable. Writing and running tests to confirm functionality after the changes.
52:11-53:28: Extending functionality to the left method and replicating the test-driven approach used for the right method.
54:00-55:18: Cleaning up and optimizing the code after successful test results, ensuring all tests remain green.
56:00-56:48: Summary of the refactoring process and demonstration of the final Rover and Direction class setup.

QnA

[56:48–59:02]
Introduction to the Q&A session with Nuno Castro and Ian Barker. The discussion opens with strategies for writing tests for projects without existing tests. Suggestions include starting with end-to-end tests and gradually adding component-specific tests during future changes.

GUI Tools, A/B Testing, and Metrics

[59:02–1:03:07]
Overview of GUI testing tools like SmartBear’s TestComplete and their use in desktop and web testing. The discussion transitions into A/B testing, explaining its purpose and real-world examples (e.g., Coca-Cola product testing). The importance of metrics to gauge feature usage before redesign or development is also highlighted.

Agile Methodologies and Encouragement for TDD

[1:03:07–1:06:50]
Reflection on Agile methodologies, balancing speed with system stability, and evolving approaches such as Facebook’s shift from “move fast and break things” to prioritizing reliability. The session concludes with encouragement to adopt Test-Driven Development (TDD) and a nod to the value of unedited coding demos to showcase realistic problem-solving.

Self-Development, Testing, and TDD Approaches

[1:10:01–1:13:36]
Introduction to self-development as both a science and an art. Discussion includes testing strategies to ensure business logic isn’t broken, addressing overfitting in tests, and balancing test coverage with real-world solutions. User stories are highlighted as a foundation for design, followed by a comparison of the Chicago and London schools of TDD.

Design, User Experience, and Business Logic

[1:13:36–1:17:01]
Emphasis on designing user interfaces first and iterating on user experience challenges. The discussion incorporates Don Norman’s insight that user errors often indicate interface design issues. It concludes with balancing business logic with test coverage in TDD.

Closing

[1:17:01–1:18:00]
The importance of prioritizing timely application releases over perfectionism is discussed. The webinar ends with closing remarks, thanks to participants, replay information, and a final farewell.

How to Query File Attributes 50x faster on Windows

Imagine you’re developing a tool that needs to scan for file changes across thousands of project files. Retrieving file attributes efficiently becomes critical for such scenarios. In this article, I’ll demonstrate a technique to get file attributes that can achieve a surprising speedup of over 50+ times compared to standard Windows methods.

Let’s dive in and explore how we can achieve this.

This is a blog post made in collaboration with Bartlomiej Filipek from C++ stories. You can visit his blog here.

The inspiration

The inspiration for this article came from a recent update to Visual Assist – a tool that greatly improves the Visual Studio experience and productivity for C# and C++ developers.

In one of their blog posts, they shared:

The initial parse is 10..15x faster!

What’s New in Visual Assist 2024—Featuring lightning fast parser performance [Webinar] – Tomato Soup

After watching the webinar, I noticed some details about efficiently getting file attributes and decided to give it a try on my machine. In other words, I tried to recreate their results.

Disclaimer: Idera, the company behind Visual Assist, helped me write this post and sponsored it.

Understanding File Attribute Retrieval Methods on Windows

On Windows, there are at least a few options to check for a file change:

  • FindFirstFile[EX] – with Basic, Standard and LargeFetch options
  • GetFileAttributesEx
  • std::filesystem
  • GetFileInformationByHandleEx

Below, you can see some primary usage of each approach:

FindFirstFileEx

FindFirstFileEx is a Windows API function that allows for efficient searching of directories. It retrieves information about files that match a specified file name pattern. The function can be used with different information levels, such as FindExInfoBasic and FindExInfoStandard, to control the amount of file information fetched.

WIN32_FIND_DATA findFileData;
HANDLE hFind = FindFirstFileEx((directory + "\\*").c_str(), FindExInfoBasic, &findFileData, FindExSearchNameMatch, NULL, 0);

if (hFind != INVALID_HANDLE_VALUE) {
    do {
        // Process file information
    } while (FindNextFile(hFind, &findFileData) != 0);
    FindClose(hFind);
}

Additionally, you can pass FIND_FIRST_EX_LARGE_FETCH as an additional flag to indicate that the function should use a larger buffer, which might bring some extra performance.
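
A minimal variation of the call above with that flag set (the rest of the loop stays the same):

// Same search as before, but asking the OS to fetch directory entries in larger batches.
HANDLE hFind = FindFirstFileEx((directory + "\\*").c_str(),
                               FindExInfoBasic,
                               &findFileData,
                               FindExSearchNameMatch,
                               NULL,
                               FIND_FIRST_EX_LARGE_FETCH);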

GetFileAttributesEx

GetFileAttributesEx is another Windows API function that retrieves file attributes for a specified file or directory. Unlike FindFirstFileEx, which is used for searching and listing files, GetFileAttributesEx is typically used for retrieving attributes of a single file or directory.

WIN32_FILE_ATTRIBUTE_DATA fileAttributeData;
if (GetFileAttributesEx((directory + "\\" + fileName).c_str(), GetFileExInfoStandard, &fileAttributeData)) {
    // Process file attributes
}

GetFileInformationByHandleEx

GetFileInformationByHandleEx is a low-level routine that might be tricky to use, but it gives us more control over the iteration. The main idea is to fetch a large buffer of data and read it on the application side, rather than rely on sometimes costly kernel/system calls.

Assuming you have a file open, which is a directory, you can iterate over its children in the following way:

constexpr DWORD BufferSize = 64 * 1024;
uint8_t buffer[BufferSize];
FILE_FULL_DIR_INFO* pInfo = nullptr;

while (true) {
    // Reset to the start of the buffer before each call; the previous
    // iteration leaves pInfo pointing at an entry inside it.
    pInfo = reinterpret_cast<FILE_FULL_DIR_INFO*>(buffer);
    if (!GetFileInformationByHandleEx(
        hDir,
        FileFullDirectoryInfo,
        pInfo,
        sizeof(buffer))) {
        DWORD error = GetLastError();
        if (error == ERROR_NO_MORE_FILES) {
            break;
        }
        else {
            std::wcerr << L"GetFileInformationByHandleEx failed (" << error << L")\n";
            break;
        }
    }

    for (;;) {
        if (!(pInfo->FileAttributes & FILE_ATTRIBUTE_DIRECTORY)) {
            FileInfo fileInfo;
            fileInfo.fileName = std::wstring(pInfo->FileName, pInfo->FileNameLength / sizeof(WCHAR));
            FILETIME ft{};
            ft.dwLowDateTime = pInfo->LastWriteTime.LowPart;
            ft.dwHighDateTime = pInfo->LastWriteTime.HighPart;
            fileInfo.lastWriteTime = ft;
            files.push_back(fileInfo);
        }
        if (pInfo->NextEntryOffset == 0) {
            break;  // last entry in this buffer; fetch the next batch
        }
        pInfo = reinterpret_cast<FILE_FULL_DIR_INFO*>(
            reinterpret_cast<BYTE*>(pInfo) + pInfo->NextEntryOffset);
    }
}

std::filesystem

Introduced in C++17, the std::filesystem library provides a modern and portable way to interact with the file system. It includes functions for file attribute retrieval, directory iteration, and other common file system operations.

namespace fs = std::filesystem;

for (const auto& entry : fs::directory_iterator(directory)) {
    if (entry.is_regular_file()) {
        // Process file attributes
        auto ftime = fs::last_write_time(entry);
        ...
    }
}

The Benchmark

To evaluate the performance of the different file attribute retrieval methods, I developed a small benchmark. The application measures the time taken by each method to retrieve file attributes for N files in a specified directory.

Here’s a rough overview of the code:

The FileInfo struct stores the file name and last write time.

struct FileInfo {
    std::wstring fileName;
    std::variant<FILETIME, std::filesystem::file_time_type> lastWriteTime;
};

Each retrieval technique will have to go over a directory and build a vector of FileInfo objects.
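
The exact harness is not important, but for context, a timing wrapper along these lines is all that is needed (an illustrative sketch, not necessarily the code behind the published numbers):

#include <chrono>
#include <iostream>

// Runs one benchmark callable and prints how long it took.
template <typename Benchmark>
void RunTimed(const char* name, Benchmark&& benchmark)
{
    const auto start = std::chrono::steady_clock::now();
    benchmark();
    const auto end = std::chrono::steady_clock::now();
    const auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count();
    std::cout << name << ": " << ms << " ms\n";
}

// Example usage with one of the benchmarks defined below:
//   std::vector<FileInfo> files;
//   RunTimed("FindFirstFileEx (Basic)", [&] {
//       BenchmarkFindFirstFileEx(directory, files, FindExInfoBasic);
//   });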

BenchmarkFindFirstFileEx

void BenchmarkFindFirstFileEx(const std::string& directory, 	
                              std::vector<FileInfo>& files, 
                              FINDEX_INFO_LEVELS infoLevel) 
{
   WIN32_FIND_DATA findFileData;
   HANDLE hFind = FindFirstFileEx((directory + "\\*").c_str(),
                                   infoLevel, 
                                   &findFileData, 
                                   FindExSearchNameMatch, NULL, 0);

   if (hFind == INVALID_HANDLE_VALUE) {
       std::cerr << "FindFirstFileEx failed (" 
                 << GetLastError() << ")\n";
       return;
   }

   do {
       if (!(findFileData.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY)) {
           FileInfo fileInfo;
           fileInfo.fileName = findFileData.cFileName;
           fileInfo.lastWriteTime = findFileData.ftLastWriteTime;
           files.push_back(fileInfo);
       }
   } while (FindNextFile(hFind, &findFileData) != 0);

   FindClose(hFind);
}

BenchmarkGetFileAttributesEx

void BenchmarkGetFileAttributesEx(const std::wstring& directory,
                                  std::vector<FileInfo>& files)
{
   WIN32_FIND_DATAW findFileData;
   HANDLE hFind = FindFirstFileW((directory + L"\\*").c_str(),
                                 &findFileData);

   if (hFind == INVALID_HANDLE_VALUE) {
       std::wcerr << L"FindFirstFile failed ("
                  << GetLastError() << L")\n";
       return;
   }

   do {
       if (!(findFileData.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY)) {
           WIN32_FILE_ATTRIBUTE_DATA fileAttributeData;
           if (GetFileAttributesExW((directory + L"\\" + findFileData.cFileName).c_str(),
                                    GetFileExInfoStandard, &fileAttributeData)) {
               FileInfo fileInfo;
               fileInfo.fileName = findFileData.cFileName;
               fileInfo.lastWriteTime = fileAttributeData.ftLastWriteTime;
               files.push_back(fileInfo);
           }
       }
   } while (FindNextFileW(hFind, &findFileData) != 0);

   FindClose(hFind);
}

BenchmarkStdFilesystem

And the last one, the most portable technique:

void BenchmarkStdFilesystem(const std::wstring& directory,
                            std::vector<FileInfo>& files)
{
    for (const auto& entry : std::filesystem::directory_iterator(directory)) {
        if (entry.is_regular_file()) {
            FileInfo fileInfo;
            fileInfo.fileName = entry.path().filename().wstring();
            fileInfo.lastWriteTime = entry.last_write_time();
            files.push_back(fileInfo);
        }
    }
}

BenchmarkGetFileInformationByHandleEx

void BenchmarkGetFileInformationByHandleEx(const std::wstring& directory, std::vector<FileInfo>& files) {
    HANDLE hDir = CreateFileW(
        directory.c_str(),
        GENERIC_READ,
        FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
        NULL,
        OPEN_EXISTING,
        FILE_FLAG_BACKUP_SEMANTICS,
        NULL
    );

    if (hDir == INVALID_HANDLE_VALUE) {
        std::wcerr << L"CreateFile failed (" << GetLastError() << L")\n";
        return;
    }

    constexpr DWORD BufferSize = 64 * 1024;
    uint8_t buffer[BufferSize];

    while (true) {
        if (!GetFileInformationByHandleEx(
            hDir,
            FileFullDirectoryInfo,
            buffer,
            sizeof(buffer))) {
            DWORD error = GetLastError();
            if (error == ERROR_NO_MORE_FILES) {
                break;
            }
            else {
                std::wcerr << L"GetFileInformationByHandleEx failed (" << error << L")\n";
                break;
            }
        }

        // Restart from the top of the buffer for each batch of entries.
        FILE_FULL_DIR_INFO* pInfo = reinterpret_cast<FILE_FULL_DIR_INFO*>(buffer);
        while (true) {
            if (!(pInfo->FileAttributes & FILE_ATTRIBUTE_DIRECTORY)) {
                FileInfo fileInfo;
                fileInfo.fileName = std::wstring(pInfo->FileName, pInfo->FileNameLength / sizeof(WCHAR));
                FILETIME ft{};
                ft.dwLowDateTime = pInfo->LastWriteTime.LowPart;
                ft.dwHighDateTime = pInfo->LastWriteTime.HighPart;
                fileInfo.lastWriteTime = ft;
                files.push_back(fileInfo);
            }
            if (pInfo->NextEntryOffset == 0) {
                break; // last entry in this batch
            }
            pInfo = reinterpret_cast<FILE_FULL_DIR_INFO*>(
                reinterpret_cast<BYTE*>(pInfo) + pInfo->NextEntryOffset);
        }
    }

    CloseHandle(hDir);
}

The Main Function

The main function sets up the benchmarking environment, runs the benchmarks, and prints the results.

std::wstring directory = argv[1];
const auto arg2 = argc > 2 ? std::wstring_view(argv[2]) : std::wstring_view{};

std::vector<std::pair<std::wstring, std::function<void(std::vector<FileInfo>&)>>> benchmarks = {
    {L"FindFirstFileEx (Basic)", [&](std::vector<FileInfo>& files) {
        BenchmarkFindFirstFileEx(directory, files, FindExInfoBasic, 0);
    }},
    {L"FindFirstFileEx (Standard)", [&](std::vector<FileInfo>& files) {
        BenchmarkFindFirstFileEx(directory, files, FindExInfoStandard, 0);
    }},
    {L"FindFirstFileEx (Large Fetch)", [&](std::vector<FileInfo>& files) {	BenchmarkFindFirstFileEx(directory, files, FindExInfoStandard, FIND_FIRST_EX_LARGE_FETCH);
    }},
    {L"GetFileAttributesEx", [&](std::vector<FileInfo>& files) {
        BenchmarkGetFileAttributesEx(directory, files);
    }},
    {L"std::filesystem", [&](std::vector<FileInfo>& files) {
        BenchmarkStdFilesystem(directory, files);
        }},
    {L"GetFileInformationByHandleEx", [&](std::vector<FileInfo>& files) {
        BenchmarkGetFileInformationByHandleEx(directory, files);
    }}
};

std::vector<std::pair<std::wstring, double>> results;

for (const auto& benchmark : benchmarks) {
    std::vector<FileInfo> files;
    files.reserve(2000); // Reserve space outside the timing measurement

    auto start = std::chrono::high_resolution_clock::now();
    benchmark.second(files);
    auto end = std::chrono::high_resolution_clock::now();

    std::chrono::duration<double> elapsed = end - start;
    results.emplace_back(benchmark.first, elapsed.count());
}

PrintResultsTable(results);
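
The article doesn't show PrintResultsTable, so here is a minimal sketch of what such a helper might look like, assuming the results vector built above; the exact column formatting is my own guess:

// Not shown in the article; this is one possible implementation (an assumption).
// It derives each method's speedup factor as slowest_time / method_time, which
// is how the tables below are laid out.
#include <algorithm>
#include <iomanip>
#include <iostream>
#include <string>
#include <utility>
#include <vector>

void PrintResultsTable(const std::vector<std::pair<std::wstring, double>>& results) {
    double slowest = 0.0;
    for (const auto& [name, seconds] : results)
        slowest = std::max(slowest, seconds);

    std::wcout << std::left << std::setw(31) << L"Method"
               << std::setw(21) << L"Time (seconds)" << L"Speedup Factor\n";

    for (const auto& [name, seconds] : results) {
        std::wcout << std::left << std::setw(31) << name
                   << std::setw(21) << std::fixed << std::setprecision(7) << seconds
                   << std::setprecision(3) << (slowest / seconds) << L"\n";
    }
}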

Performance Results

To measure the performance of each file attribute retrieval method, I executed benchmarks on a directory containing 1000, 2000 or 5000 random text files. The tests were performed on a laptop equipped with an Intel i7 4720HQ CPU and an SSD. I measured the time taken by each method and compared the results to determine the fastest approach.

Each test run consisted of two executions: the first with uncached file attributes and the second likely benefiting from system-level caching.

The speedup factor is the ratio of the slowest technique's time in a given run to the current technique's time, so the slowest method always scores 1.000. For example, in the first 1000-file run, 0.2415 s / 0.00148 s ≈ 163 for FindFirstFileEx (Basic).

Directory with 1000 files:

Method                         Time (seconds)       Speedup Factor
FindFirstFileEx (Basic)        0.0014831000         162.868
FindFirstFileEx (Standard)     0.0014817000         163.022
FindFirstFileEx (Large Fetch)  0.0011792000         204.842
GetFileAttributesEx            0.2415497000         1.000
std::filesystem                0.0609313000         3.964
GetFileInformationByHandleEx   0.0044168000         54.689

Second run:
Method                         Time (seconds)       Speedup Factor
FindFirstFileEx (Basic)        0.0013805000         44.947
FindFirstFileEx (Standard)     0.0011310000         54.863
FindFirstFileEx (Large Fetch)  0.0009071000         68.404
GetFileAttributesEx            0.0616772000         1.006
std::filesystem                0.0620496000         1.000
GetFileInformationByHandleEx   0.0025246000         24.578

Directory with 2000 files:

Method                         Time (seconds)       Speedup Factor
FindFirstFileEx (Basic)        0.0014455000         150.287
FindFirstFileEx (Standard)     0.0015029000         144.547
FindFirstFileEx (Large Fetch)  0.0012086000         179.745
GetFileAttributesEx            0.2172402000         1.000
std::filesystem                0.0609186000         3.566
GetFileInformationByHandleEx   0.0025069000         86.657

Second run:

Method                         Time (seconds)       Speedup Factor
FindFirstFileEx (Basic)        0.0012020000         50.908
FindFirstFileEx (Standard)     0.0011614000         52.688
FindFirstFileEx (Large Fetch)  0.0008887000         68.856
GetFileAttributesEx            0.0611920000         1.000
std::filesystem                0.0611760000         1.000
GetFileInformationByHandleEx   0.0025835000         23.686

Directory with 5000 random, small text files:

Method                         Time (seconds)       Speedup Factor
FindFirstFileEx (Basic)        0.0077623000         84.975
FindFirstFileEx (Standard)     0.0828258000         7.964
FindFirstFileEx (Large Fetch)  0.0144611000         45.612
GetFileAttributesEx            0.6595977000         1.000
std::filesystem                0.3022779000         2.182
GetFileInformationByHandleEx   0.0051569000         127.906

Second run:

Method                         Time (seconds)       Speedup Factor
FindFirstFileEx (Basic)        0.0069814000         43.844
FindFirstFileEx (Standard)     0.0148472000         20.616
FindFirstFileEx (Large Fetch)  0.0140663000         21.761
GetFileAttributesEx            0.3060932000         1.000
std::filesystem                0.3011346000         1.016
GetFileInformationByHandleEx   0.0051614000         59.304

The results consistently showed that the FindFirstFileEx variants were the fastest in uncached scenarios, with the Large Fetch configuration reaching roughly a 205x speedup over GetFileAttributesEx on 1000 files. In cached scenarios the gap narrowed, but the FindFirstFileEx variants still delivered speedups of roughly 20x to 70x. The FIND_FIRST_EX_LARGE_FETCH flag improved performance further in the 1000- and 2000-file runs, though not in the 5000-file run.

For the directory with 2000 files, FindFirstFileEx (Large Fetch) demonstrated a speedup factor of almost 180x in the first run, which dropped to about 69x in the second run. In the directory with 5000 files, GetFileInformationByHandleEx takes the crown, achieving a 128x speedup uncached and 59x cached, while the other techniques reach at most 85x and 44x respectively. Notably, std::filesystem performed roughly on par with GetFileAttributesEx in the cached runs.

Further Techniques

Getting file attributes is only part of the story, and while important, they may contribute to only a small portion of the overall performance for the whole project. The Visual Assist team, who contributed to this article, improved their initial parse time performance by avoiding GetFileAttributes[Ex] using the same techniques as this article. But Visual Assist also improved performance through further techniques. My simple benchmark showed 50x speedups, but we cannot directly compare it with the final Visual Assist, as the tool does many more things with files.

The main item being optimised was the initial parse, where VA builds a symbol database when a project is opened for the first time. This involves parsing all code and all headers. They decided that it’s a reasonable assumption that headers won’t change while a project is being loaded, and so the file access is cached during the initial parse, avoiding the filesystem entirely. (Changes after a project has been parsed the first time are, of course, still caught.) The combination of switching to a much faster method for checking filetimes and then avoiding file IO completely contributed to the up-to-15-times-faster performance improvement they saw in version 2024.1 at the beginning of this year.
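
To make the caching idea concrete, here is a minimal sketch of a file-time cache built on the FileInfo struct from the benchmark above. This is purely illustrative and is not Visual Assist's actual implementation; the function names are invented for the example:

// Illustrative sketch only -- not Visual Assist's actual implementation.
// The idea: fill a map once from a fast directory enumeration (any of the
// benchmark functions above), then answer later file-time queries from memory,
// never touching the filesystem during the batch operation.
#include <string>
#include <unordered_map>
#include <variant>
#include <vector>
#include <windows.h>

std::unordered_map<std::wstring, FILETIME>
BuildFileTimeCache(const std::vector<FileInfo>& files) {
    std::unordered_map<std::wstring, FILETIME> cache;
    cache.reserve(files.size());
    for (const auto& f : files) {
        // Only the Win32-based techniques store a FILETIME in the variant.
        if (const FILETIME* ft = std::get_if<FILETIME>(&f.lastWriteTime))
            cache.emplace(f.fileName, *ft);
    }
    return cache;
}

bool TryGetCachedWriteTime(const std::unordered_map<std::wstring, FILETIME>& cache,
                           const std::wstring& fileName, FILETIME& out) {
    auto it = cache.find(fileName);
    if (it == cache.end())
        return false;   // not cached: caller decides whether to hit the filesystem
    out = it->second;
    return true;
}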

Read further details on their blog Visual Assist 2024.1 release post – January 2024 and Catching up with VA: Our most recent performance updates – Tomato Soup.

Summary

In this article, we went through a benchmark that compares several techniques for fetching file attributes. In short, it's best to gather attributes at the same time as you iterate through the directory, using FindFirstFileEx or GetFileInformationByHandleEx. If you perform this operation hundreds of times, measure and pick the fastest technique for your workload. What's more, if you expect lots of files in a directory, it's worth checking the techniques that offer larger buffers.

The benchmark also highlighted a trade-off: while C++17 and its filesystem library offer a robust and standardized way to work with files and directories, they can be limited in terms of performance. In many cases, if you need the best possible performance, you have to open the hood and work with the operating system's specific API.

Back to you

  • Do you use std::filesystem for tasks involving hundreds of files?
  • Do you know other techniques that offer greater performance when working with files?

Share your comments below. And if you’re using C++, you can also download and try Visual Assist yourself for 30 days for free.

The post How to Query File Attributes 50x faster on Windows first appeared on Tomato Soup.

C++ versus Blueprints: Which should I use for Unreal Engine game development? https://www.wholetomato.com/blog/c-versus-blueprints-which-should-i-use-for-unreal-engine-game-development/ https://www.wholetomato.com/blog/c-versus-blueprints-which-should-i-use-for-unreal-engine-game-development/#respond Wed, 23 Oct 2024 13:49:33 +0000 https://www.wholetomato.com/blog/?p=3983 Introduction When programming game elements in Unreal, developers have two main options: develop using Unreal’s visual blueprint system or develop using the C++ language.  The Blueprint system in Unreal Engine is a powerful visual scripting...

The post C++ versus Blueprints: Which should I use for Unreal Engine game development? first appeared on Tomato Soup.

Introduction

When programming game elements in Unreal, developers have two main options: develop using Unreal’s visual blueprint system or develop using the C++ language. 

The Blueprint system in Unreal Engine is a powerful visual scripting tool designed to help developers create gameplay mechanics without needing to write traditional code. Introduced in Unreal Engine 4 to make game development more accessible to non-programmers, Blueprints enable users to build systems by dragging and dropping pre-built nodes, representing code functions. Some developers treat blueprints as the be-all and end-all for programming in Unreal…

…but on the other hand, we have those who advocate C++ and its ability to program almost anything in Unreal. It has performance, versatility, and arguably makes you a better designer because you can control almost every mechanic of the game you are developing. 

In this blog post, we discuss the differences between the two approaches and hopefully it will help more people understand that it’s not an either/or decision and the most effective utilization is to use them to complement each other. 

Getting started: How to install Unreal Engine and Visual Studio

Introduction to Unreal’s Blueprint System

According to Epic, the creators of the Unreal Engine, the Blueprint Visual Scripting system is a “complete gameplay scripting system based on the concept of using a node-based interface to create gameplay elements from within Unreal Editor.”

Before Blueprints, Unreal Engine used a scripting language called UnrealScript (used in Unreal Engine 3 and earlier). While powerful, it required traditional programming knowledge and didn't cater to artists or designers (who arguably make up the greater bulk of game development teams) who needed to iterate rapidly without diving into code.

The idea was to make game development more accessible to a wider range of creators, especially those who weren’t programmers.

Fast forward to the highly acclaimed Unreal Engine 4, released in 2014: Epic shipped it with the Blueprint visual scripting system. Blueprints allowed developers to visually connect logic, making scripting easier and more intuitive. It was essentially UnrealScript's replacement, offering drag-and-drop functionality to build gameplay systems.

The latest updates in Unreal Engine 5 have taken blueprints one step further. Performance enhancements allow Blueprints to run more efficiently and closer to native C++ speeds, making them more suitable for complex projects. Furthermore, users now have the ability to nativize Blueprint code into C++, offering the best of both worlds by combining visual scripting ease with C++’s runtime performance.

Learn more: Unreal's Beginner's Guide to Blueprints

Quick explainer why C++ is used for Unreal Engine (and game dev)

The primary reason why C++ is used in Unreal development is the same reason it's used in game development in general—speed and performance. Additionally, as alluded to in the previous section, Unreal development is essentially C++ programming that uses a lot of macros to wrap complex code into easier-to-use bits.

Generally, the C++ language integrates nicely into the more minute processes you may want to program for Unreal. For instance, it shines when you are processing long arrays and loops that would be overwhelming to build with Blueprints, as in the small sketch below. You can also use C++ for making custom components and game mechanics that would otherwise be difficult in higher-level languages.
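
As a rough illustration (the function and parameter names below are invented, not from a real project), here is the kind of loop-heavy helper that stays compact in C++ but would balloon into a large node graph in Blueprints:

// Illustrative sketch: the function and parameter names are invented.
// A tight loop over a large array is a few lines of C++, but the equivalent
// Blueprint graph would be a sprawling set of nodes.
#include "CoreMinimal.h"

float ComputeTotalDamage(const TArray<float>& DamageEvents)
{
    float Total = 0.0f;
    for (const float Damage : DamageEvents) // TArray supports range-based for
    {
        Total += Damage;
    }
    return Total;
}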

There are many more areas and disciplines we can talk about when it comes to C++, but the bottom line is that C++ gives you more control with memory. This consequently means more control over the systems that you can work with when developing your game.

Sample C++ code for an Unreal Engine game project. Syntax highlighting provided by Visual Assist plugin.

Comparing Blueprints and C++

When you are starting out in development in Unreal you will often find a clash of opinions on whether you should learn the blueprints system or dive into it with C++. Some people use C++ or blueprints exclusively—here are two summaries of these two views:

Why people may start with ONLY blueprints:

Blueprints are much easier to pick up. You don’t need to dive into complex code—everything’s visual. You’re basically dragging and connecting nodes to create mechanics, which means you can start building right away. 

There is no need to learn C++ before you can make something cool. If you’re new to Unreal Engine or game development in general, this is a huge plus because you can see results fast, without getting stuck on syntax or debugging.

And here’s the thing: Blueprints were introduced by Epic themselves. Similar to all the options available to you inside the engine, blueprints is a super powerful system that can be used for most game mechanics. 

Unreal Engine has optimized them to run smoothly, and unless you’re doing something really performance-heavy (like complex physics simulations), Blueprints will handle it just fine. You can even do advanced logic in Blueprints—things like AI, UI, and game state management—without needing to touch C++.

The other big advantage is speed—not computing speed, that’s C++’s zone. We’re talking about prototyping speed, especially in the early stages of development. Blueprints lets you iterate faster. You can make changes on the fly, test new ideas, and tweak mechanics without waiting for code to compile or worrying about errors. It’s especially helpful in small teams or solo projects where you need to move quickly and stay creative.

Also, Blueprints make it easier for non-programmers (like designers or artists) to collaborate. If you’re working with others, they can understand and adjust the game mechanics without needing to learn C++. 

Now, that’s not saying Blueprints are the only answer, but for most cases, especially if you’re starting out or need to quickly build and test, they’re perfect jumping boards. You can always add C++ later if you need more control or optimization. But for rapid development, ease of use, and accessibility, Blueprints are a great way to go.

So, why Blueprints? Easy to learn, fast to prototype, powerful for most tasks, and great for collaboration. You can always dive into C++ later, but for getting started and getting things done, Blueprints are more than enough!

Why people may start using ONLY C++:

C++ can sound intimidating compared to Blueprints, which lets you drag and drop things easily. But here’s why C++ is worth the challenge. Think of Blueprints like using LEGO blocks—you can build cool things, but you’re limited to prefabs. You can only build stuff with the pieces you have. What if you wanted to create a curved surface when there’s no curved block available?

In C++, you can make your own custom blocks. Curved, straight, jagged, irregular, all’s available for you to create yourself. You can control every detail of how your game works, especially when you want something that Unreal Engine doesn’t offer by default.

Now, performance. When your game gets complex, like with a huge world or fast-paced multiplayer, C++ runs circles around Blueprints. It's just faster, talking directly to your computer's hardware. Imagine you're building a huge open world or an MMO—C++ will handle massive tasks way better than Blueprints. It's the difference between a race car and a scooter.

And here’s a big one: the industry loves C++ developers. If you master it, you’re not just a game designer—you’re in high demand. Studios know C++ developers can dig deep into the engine, creating systems that Blueprints just can’t match in complexity or performance. Plus, the skills you learn in C++? They transfer to tons of other tech fields like finance, AI, or data analysis.

C++ is harder, but mastering it means you’ll be able to do anything in Unreal + others. You’re not just stuck building with what’s given—you’re creating from scratch. It’s more control, faster performance, deeper understanding, and wider career options. It’s harder, but trust me, once you learn it, you’ll be unstoppable. 

Summary:

  • Ease of use: Blueprints are beginner friendly and easier to pick up, while C++ has a steeper learning curve.
  • Readability: Blueprints use visual nodes signifying properties, which are easy to understand but quickly get complicated as the number of nodes grows. C++ uses code bases and solutions that require more knowledge, but a few lines of code can be equivalent to a screen full of Blueprints.
  • Flexibility (use cases): Blueprints are limited by what is exposed in the Blueprint system, making highly custom systems hard to implement. C++ allows full access to everything under the hood, giving you the entire engine for custom mechanics and optimizations.
  • Performance: Blueprints are fast enough for most cases but not advisable for complex or critical components. C++ is high-performance and handles resource-intensive mechanics more efficiently.
  • Collaboration: Blueprints are easy to understand, even for non-programmers. C++ is usually read and written by C++ programmers only.
  • Usage: Blueprints are primarily used for rapid prototyping, simple logic, assets, scripts, and visual FX. C++ is primarily used for large, complex systems, performance-critical code, advanced customization, and low-level engine access.
  • Maintenance: Blueprints can become unwieldy in large-scale projects, where visual logic is hard to track and refactor. C++ is easier to maintain in large projects with proper coding practices, and easier to refactor and debug.
  • Integration: both are built into the Unreal ecosystem; Blueprints work with and compile into C++, and C++ works with Blueprints.

Now wait a minute… Focus on the last point, on integration. Both C++ and the Blueprint system are integrated into the Unreal development ecosystem and work with each other? So what should you focus on first? Continue to the next section to find out our suggestion for the most effective way of developing in Unreal.

The Most Optimal Approach for C++ vs Blueprints – Our Suggestion:

Using blueprints and C++ are not exclusive. They are both ways to program mechanics, albeit at different levels. Utilize each according to the task requirements.

If you're coming into this blog post as a bona fide beginner (no experience with programming, no experience with Unreal), then the best approach for you is most likely to begin with Unreal's Blueprint system. You can expose yourself to the fundamentals of game development and see where you fit in. Are you going to be a game designer handling assets and world building primarily, or do you see yourself as someone who designs the core mechanics of gameplay?

Either way, it may be best for you to start with blueprints first as its beginner-friendly learning curve can help you answer these questions.

Now, if you have studied both approaches and have a basic understanding of Unreal development, and you’re looking for an answer to the question: What should I master first? Or which is better to use: BP or C++?

There is a false dichotomy between C++ and blueprints. C++ is a programming language, and Blueprints is a scripting system; you don’t have to use either exclusively. In fact, it’s actually better to use both simultaneously. C++ and Blueprints are integrated and allow easy interoperability. 

C++ is naturally better-suited for implementing low-level game systems, and Blueprints is naturally better-suited for defining high-level behaviors and interactions and for integrating aesthetic assets. But luckily for us, the game engine is designed so that you can jump back and forth between native C++ code and the scripting nodes.

The bottom line is that you can use both. Or you should use both so that you can get the benefit out of both systems.

The best way is to create custom C++ functions or classes. Then connect it all in blueprints.

Here is an example:

Say you need to implement a pathfinding mechanic for a small game board. It's best to write the pathfinding algorithm in C++, where you get the benefit of denser logic, clarity, and easy, powerful debugging, and then expose it to Blueprints, where you can call it.
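
Here is a minimal sketch of what that split can look like, assuming a standard Unreal Engine C++ project; the class, function, and property names are invented for illustration (and the module export macro is omitted for brevity):

// Illustrative sketch: heavy pathfinding logic lives in C++, while Blueprints
// only call the exposed function and tweak the exposed property in the editor.
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "BoardPathfinder.generated.h"

UCLASS()
class ABoardPathfinder : public AActor
{
    GENERATED_BODY()

public:
    // Editable in the editor / Blueprints instead of being hard-coded.
    UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = "Pathfinding")
    int32 MaxSearchSteps = 256;

    // Callable from a Blueprint node graph; the heavy search lives in the .cpp file.
    UFUNCTION(BlueprintCallable, Category = "Pathfinding")
    TArray<FIntPoint> FindPath(FIntPoint Start, FIntPoint Goal);
};

On the Blueprint side, FindPath then shows up as an ordinary node, and MaxSearchSteps can be tweaked per instance in the editor, so designers can wire the mechanic into gameplay without touching the C++ implementation.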

It's worth noting that Blueprints weren't created as an alternative to writing C++; rather, Blueprints were created to complement complex game systems built in C++ by making it very easy to do things like assigning property values in the editor instead of hard-coding them. So as you get more and more familiar with the engine, try creating systems in C++ that you can then extend in Blueprints for a very efficient workflow.

With this in mind, our suggestion is to use blueprints and get exposure to how the engine works, and when you’ve hit a wall of complexity that isn’t feasible with blueprints, you can extract the complex logic to C++ and use blueprint nodes to wrap that logic. 


By adopting this hybrid workflow, you leverage the best of both worlds: the power and performance of C++ and the user-friendly nature of Blueprints for rapid iteration and testing. As you evolve in your development skills, this combination will enable you to create more complex and engaging gameplay experiences with greater ease.

Developer Protip: Make C++ Development Even More Simple

A lot of the difficulties in C++ come with learning its syntax and how it connects with what you see in the Unreal Editor. C++ can seem intimidating because of the level of abstraction needed. Developers, especially beginners, need all the support they can get.

Choosing your integrated development environment (IDE) is a fundamental decision when you decide to start learning C++ for Unreal. It contains the basic tools required to write and test your game software. And additionally, it provides nifty support and helpful prompts that can guide you.

If you're coding using Visual Studio (one of the IDEs recommended by Epic themselves), here's a must-have plugin for Unreal Engine development: Visual Assist. It is a plugin made to help Unreal developers working inside Visual Studio. It helps you navigate huge projects, replaces some IDE features such as Find References with better alternatives, and even helps your IDE understand Unreal-specific syntax, giving you essential highlighting and context-aware prompts.

Make Visual Studio work better with Unreal development by using Visual Assist.

Visual Assist’s own lead developer, Chris Gardner, shows how you can use C++ to create your own powerup in Unreal’s sample shooter game.

Conclusion:

In conclusion, navigating the world of game development with Unreal Engine involves understanding the complementary strengths of C++ and Blueprints. While Blueprints offer a user-friendly and visually intuitive approach, allowing developers to quickly prototype and implement gameplay mechanics, C++ provides the performance, control, and depth necessary for more complex projects. By recognizing that these two approaches are not mutually exclusive but complementary, developers can create more efficient game systems.

By leveraging the unique benefits of both C++ and Blueprints, you position yourself to create more engaging and polished gameplay experiences. Ultimately, whether you're a newcomer eager to start building or an experienced developer looking to refine your skills, understanding how to effectively combine these tools will be invaluable in your quest to master Unreal Engine. Hence, it is not a question of C++ or Blueprints, but a statement: C++ AND Blueprints.

The post C++ versus Blueprints: Which should I use for Unreal Engine game development? first appeared on Tomato Soup.

Success Story: Visual Assist for modeling and simulation software for automotive C++ https://www.wholetomato.com/blog/visual-assist-automotive-c/ https://www.wholetomato.com/blog/visual-assist-automotive-c/#respond Thu, 26 Sep 2024 17:50:44 +0000 https://www.wholetomato.com/blog/?p=3927 About the Client Based in Europe, the client is a global company specializing in the development and manufacturing of high-performance systems for vehicle technology. As a company that has been in the industry for over...

The post Success Story: Visual Assist for modeling and simulation software for automotive C++ first appeared on Tomato Soup.

About the Client

Based in Europe, the client is a global company specializing in the development and manufacturing of high-performance systems for vehicle technology. As a company that has been in the industry for over a century, their longstanding focus on innovation has positioned them as one of the top automotive manufacturers worldwide. As part of their commitment to quality, they have invested heavily in simulation tools for vehicle design, testing, and validation, ensuring efficiency and reliability for their partner manufacturers.

Services offered by the company

They engineer and produce various automotive technologies such as engine and electronics systems for passenger cars, commercial vehicles, and data measurement services.

 

Use case and challenges

We had the privilege of speaking with the lead developer and his team who create modeling and simulation software. We discussed their daily work and the challenges they face:

Use Cases:

  • They develop C++ applications in Microsoft’s Visual Studio for internal use.
  • They create bespoke programs for modeling components and simulating them in various scenarios.
  • Their primary language is C/C++ in Visual Studio because it interfaces easily with their other tools.

Challenges:

  • As an advanced tech provider, their workflow and output is highly specialized. Each project is tailor-made specifically for a certain client or customer.
  • They have huge legacy code bases that they have to maintain and modernize. 
  • Because of the precision involved in measurements, they handle large amounts of data from different sources of measurement.

Solution

Visual Assist was introduced to the team many years ago and it has since been a staple tool used daily by the developer team. They use Visual Assist for a variety of use cases including:

  • Refactoring and modernizing code is exponentially faster.
    Because their toolchain was initially built back in the 1960s, they had a lot of code modernization and translation projects. They also had to integrate the code with new tools and bring it up to the latest coding standards.

    Visual Assist’s refactoring feature has been an indispensable asset in updating the outdated code structures, making them more readable, memory-safe, and maintainable. It takes the pain out of manually bringing legacy or deprecated code up to standard by automatically renaming variables or extracting methods, reducing the risk of introducing errors during manual updates. This includes refactoring to use modern, secure and safe coding styles. Effectively Visual Assist simplifies their C++ code maintenance so that they can focus on manufacturing and designing parts, not code.
  • Navigating old code and huge projects happens in a single click.
    Visual Assist greatly helps the team get around their huge legacy projects with smart navigation features. Finding and searching for certain sections of code is a cumbersome ordeal that VA just completely skips over with features like Find References, Find Symbols, the various Go To functions, and the like.
  • Snappier performance on large projects and solutions.
    When it comes to handling large amounts of data, Visual Assist's optimized startup speed and low memory footprint provide the team with snappy and accurate code assistance. Due to the repetitive nature of their projects, the few seconds that Visual Assist saves compound over time and can boost productivity by as much as 20%.

This non-exhaustive list is a testament to how Visual Assist can save hundreds of hours of valuable productivity time by providing smart suggestions, speedy features, and a satisfying experience for the Visual Studio IDE.

Interested?

Interested in getting the same benefits for you or your team? Visual Assist is free to try for thirty days.

Whether you're looking to boost your team's productivity or to optimize your own development process, now's the perfect time to upgrade your toolkit with one of the most trusted Visual Studio plugins. Click the link below to learn more about Visual Assist.

The post Success Story: Visual Assist for modeling and simulation software for automotive C++ first appeared on Tomato Soup.

Getting started with how to use C++ for embedded systems in financial services https://www.wholetomato.com/blog/getting-started-with-how-to-use-c-for-embedded-systems-in-financial-services/ https://www.wholetomato.com/blog/getting-started-with-how-to-use-c-for-embedded-systems-in-financial-services/#respond Mon, 23 Sep 2024 16:56:12 +0000 https://www.wholetomato.com/blog/?p=3919 In today’s fast-paced financial technology landscape, the demand for robust, high-performance software is increasing. At the core of the majority of financial innovations lies C++, a language revered for its speed, efficiency, and control.  As...

The post Getting started with how to use C++ for embedded systems in financial services first appeared on Tomato Soup.

In today’s fast-paced financial technology landscape, the demand for robust, high-performance software is increasing. At the core of the majority of financial innovations lies C++, a language revered for its speed, efficiency, and control. 

As financial institutions continue to incorporate advanced electronics and embedded systems into their operations—be it through the ATMs we rely on for banking transactions, the sophisticated high-frequency trading platforms, or the secure transaction systems that protect our finances—C++ has become an indispensable tool.

Embedded systems are central to the proliferation of financial services which require real-time processing capabilities that only a highly performant language like C++ can provide. The financial sector’s demands for speed, precision, and security make C++ the language of choice for developers tasked with building the systems that underpin our financial infrastructure.

In this blog, we explore how C++ is used in these mission-critical financial systems. We’ll examine why it is suitable for embedded systems in finance.

Embedded systems in financial services

What are embedded systems?

Embedded systems are specialized computing systems designed to perform dedicated tasks within larger devices or systems. Unlike general-purpose computers, they are optimized for specific functions, often operating with real-time constraints and limited resources. Common examples of embedded systems include automotive control units, medical devices like pacemakers, and home appliances such as microwaves or washing machines. These systems are crucial in industries requiring precise control and efficiency, even outside the financial sector.

How embedded apps and digitalization are transforming financial software

The primary driver of the increasing demand for embedded systems is digitalization. Or to be more specific, inevitable progress in tech is opening more ways to serve underbanked communities; these opportunities require more and more digital alternatives to traditional banking. 

About two decades ago, the fintech model relied on individual banks serving a whole community. Today, every business is expected to accept payments through digital platforms, credit cards, and other payment channels. This has minimized red tape, and payments and financial services have become more seamless.

For instance, e-wallets and banking apps on smartphones have certainly made financial services easier to access, however, physical devices must still be available for businesses to use as terminals and portals for digital transactions. This is where embedded systems on devices come in.

Examples of Embedded Systems used in financial services

Point-of-Sale (POS) Systems

POS systems are ubiquitous in retail stores, restaurants, and other businesses that accept payments. These systems integrate embedded processors and software to handle various functions like:

  • Accepting credit/debit card payments
  • Tracking inventory and sales data
  • Generating receipts and reports

POS terminals are essentially embedded computers designed for payment processing and business management.

ATMs (Automated Teller Machines)

ATMs are self-service banking kiosks that contain embedded systems in the form of peripheral devices. Embedded systems help the main PC operating system manage the user interface, cash dispenser, and card reader. They also communicate with the bank's central computer system.

Contactless Payment Terminals

Contactless payment terminals are embedded systems that enable customers to make payments by tapping or waving their credit/debit cards or mobile devices near the terminal. These terminals use near-field communication (NFC) technology and are commonly found at retail checkouts and transit fare gates. Smartwatches, fitness trackers, and other wearable devices can be embedded with payment capabilities.

Section 2: C++ in finance and banking

Why financial embedded systems use C++

Embedded systems use C++ because it lets developers control hardware directly while still keeping the code organized and easier to manage. There is a good mix of low-level hardware control and high-level programming abstractions.

C++ is great for devices with limited memory or processing power, like small sensors or controllers, because it helps the code run fast. It also allows developers to write code that can work on different types of devices without starting from scratch. This makes C++ a popular choice for many embedded systems. Additionally, C++ offers portability, making it easier to adapt code across different embedded platforms.

The demands of financial software

In the financial sector, software systems face exceptionally high demands. These systems must deliver extreme performance, steadfast reliability, and robust security to support critical functions like real-time trading, transaction processing, and risk management. The stakes are incredibly high, as even minor software failures can result in significant financial losses, security breaches, and a loss of client trust. 

C++ is well-equipped to meet these rigorous requirements. Renowned for its speed and efficiency, C++ enables developers to create high-performance applications crucial for environments where every millisecond can impact trading results. Its low-level memory control allows for precise management of system resources, ensuring both stability and responsiveness in financial systems. Additionally, C++ is supported by a comprehensive suite of libraries designed for complex financial operations, making it an ideal choice for developing secure and high-performing financial software.

Advantages of the C++ language in Financial Software

  • Lower-level language: C++ compiles into highly efficient machine code, providing real-time processing capabilities and scalability. It is faster than interpreted languages like Python or JavaScript, which are unsuitable for real-time performance requirements.
  • Speed and performance: C++ handles intensive computational tasks with minimal overhead, making it ideal for high-performance applications. Python, similarly popular in finance programming, offers simplicity and faster development cycles but lacks the execution speed needed for high-performance financial software.
  • Embedded-specific support: options such as no-exception builds let you disable certain features (like exceptions) to minimize overhead. Languages like Java have less flexibility in trimming down features for embedded use.
  • Scalability and processing power: C++ can accommodate increasing volumes of data and transactions, a necessity in a growing financial sector. Java strikes a balance between usability and performance but cannot match the raw processing power and system control that C++ provides.

Section 3: The challenges for C++ programmers developing embedded systems

In the high-stakes world of financial systems, performance optimization is not merely an option but a critical necessity. Financial applications, such as high-frequency trading platforms and real-time risk management systems, operate under intense performance constraints where even the smallest delay can have significant repercussions. As a result, C++ developers are tasked with continuously fine-tuning their code to meet performance requirements.

One of the primary challenges in this optimization process is managing memory. C++ provides low-level control over memory allocation, which allows for precise performance tuning but also demands that developers manually handle memory management. This responsibility includes careful allocation and deallocation to prevent memory leaks and ensure efficient resource utilization. 

Additionally, reducing latency is crucial in financial applications where timely processing of data and execution of trades are essential. Developers must implement strategies to minimize latency, which involves optimizing algorithms, data structures, and reducing the impact of I/O operations. Productivity enhancing tools such as Visual Assist C++ that simplify refactoring help here immensely as they can help spot unnecessary elements—more on helpful tools later. 

Maintaining code quality while optimizing performance presents another challenge. Performance enhancements often require low-level changes to the code, which can complicate readability and maintainability. Balancing the need for high performance with the necessity of keeping the codebase understandable and manageable is a continuous struggle for C++ developers working in the finance sector. 

Readability is an often underestimated facet of development. Embedded code can be hard to read, or may drop from C++ to lower-level C. For instance, when accessing IO pins on an embedded device via a cable plugged into "general purpose IO" (GPIO) pins, you have to use the base-level language that can communicate with the hardware itself. At that point, it's key to have tooling that helps you understand and verify your code as you move back and forth between higher and lower levels of abstraction.
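
As an illustration of that kind of low-level access, here is a minimal sketch of toggling a GPIO pin through a memory-mapped register; the address and bit layout are invented for the example, since a real device's datasheet defines the actual register map:

// Illustrative sketch: the base address and bit layout are made up; a real
// device's datasheet defines the actual register map.
#include <cstdint>

namespace gpio {
    // Hypothetical memory-mapped GPIO output register.
    constexpr std::uintptr_t kOutputRegisterAddress = 0x40020000;

    inline volatile std::uint32_t& OutputRegister() {
        return *reinterpret_cast<volatile std::uint32_t*>(kOutputRegisterAddress);
    }

    inline void SetPinHigh(unsigned pin) {
        OutputRegister() |= (1u << pin);    // set the pin's bit
    }

    inline void SetPinLow(unsigned pin) {
        OutputRegister() &= ~(1u << pin);   // clear the pin's bit
    }
}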

As simple as possible: C++ vs Embedded C++

When discussing C++ versus Embedded C++, it’s essential to understand that while they share a common language foundation, the environments in which they are applied significantly influence the design, usage, and constraints of these two variants.

The main difference with C++ in embedded systems is that it has to be more efficient because devices often have limited memory and processing power. Embedded C++ also involves directly controlling hardware, like sensors and processors, which isn’t as common in traditional C++. Finally, some C++ features, like dynamic memory management, are used less or even avoided entirely in embedded systems to avoid performance issues. Rather than using the standard STL, it’s common to use other libraries tailored for embedded use, like the ETL.

  • Memory management and constraints

C++ on a desktop or server system operates in a much more forgiving environment. It has access to extensive memory, high processing power, and can rely on an operating system for memory management and multitasking. In contrast, Embedded C++ targets microcontrollers or other resource-constrained devices, where memory (both RAM and flash) is limited, and there may not be an operating system at all.

For instance, in an embedded system, dynamic memory allocation using new and delete can be risky due to fragmentation, leading to memory exhaustion over time. Many embedded systems developers avoid heap allocation entirely, preferring static or stack allocation, or using custom memory management techniques tailored to the system’s constraints.

Some devices, such as ATMs or POS systems, need a small amount of flash memory, a form of non-volatile memory, to keep a small database. For example, some systems need to keep the past 24 hours of transactions on the device itself as a backup for when the bank network goes down unexpectedly. For these cases, reliable, memory-efficient libraries for compression and embedded databases are used.

  • Performance and real-time requirements

Another significant difference arises in performance and real-time behavior. In standard C++ applications, performance is still important, but not necessarily tied to hard real-time requirements.

In contrast, embedded systems often have strict timing constraints, and code must execute within a specific time frame to meet system requirements. This demands careful optimization and the avoidance of certain C++ abstractions that can introduce unpredictable execution times.

For example, C++ standard library features like the Standard Template Library (STL) may not be suitable for embedded environments. Containers like std::vector or std::map can introduce hidden memory allocations and performance overhead, which can be detrimental in a real-time system.

As a result, embedded C++ developers often resort to lightweight custom libraries or write their own data structures optimized for their specific hardware. You can use libraries like the Embedded Template Library (ETL), which provides STL-like functionality intended for embedded devices; a minimal sketch of the fixed-capacity idea appears after this list. You can also search the list of libraries from GitHub user "fffaraz" using the search term "embedded" for more resources specific to embedded systems.

  • Hardware Interfacing

Embedded systems often require precise control over hardware peripherals, like I/O pins, timers, or communication interfaces. This entails hardware-specific code, where developers directly manipulate memory-mapped registers to control the device.

In standard C++, you rarely deal with such low-level hardware specifics. Embedded C++ developers, however, often need to interact directly with hardware registers and bit manipulation, as shown in the examples with the ATM or POS systems. This introduces a level of complexity not typically found in standard desktop or server C++ development.

  • Debugging Challenges

Given the nature of embedded systems, debugging is inherently more complex due to the lack of the typical debugging resources available in standard C++ environments. Desktop developers can rely on sophisticated debuggers, full IDEs, and graphical interfaces to step through code, inspect memory, and trace program execution. In contrast, embedded developers often work without these luxuries.

Debugging tools may be limited to physical devices that plug into the circuitry, or maybe testers and emulators that merely simulate the device. The best case scenarios will involve some form of rudimentary debugging tool integrated into the device. But for the most part, it will still be a step down from traditional C++ debugging.
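
To make the "avoid heap allocation" point from the list above concrete, here is a minimal sketch of a fixed-capacity vector built only on std::array; it is illustrative only, and real projects would typically reach for a hardened library type such as the ETL's equivalent:

// Illustrative sketch: a fixed-capacity vector that never touches the heap.
// Embedded projects typically use a hardened library type (e.g. from the ETL)
// rather than hand-rolling this.
#include <array>
#include <cassert>
#include <cstddef>

template <typename T, std::size_t Capacity>
class FixedVector {
public:
    bool push_back(const T& value) {
        if (size_ == Capacity)
            return false;              // full: report failure instead of allocating
        storage_[size_++] = value;
        return true;
    }

    T&       operator[](std::size_t i)       { assert(i < size_); return storage_[i]; }
    const T& operator[](std::size_t i) const { assert(i < size_); return storage_[i]; }
    std::size_t size() const { return size_; }

private:
    std::array<T, Capacity> storage_{}; // storage lives on the stack or in static memory
    std::size_t size_ = 0;
};

// Usage: a small transaction log that can never allocate or fragment memory.
// FixedVector<int, 32> recentTransactionIds;
// recentTransactionIds.push_back(42);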

Section 4: Pro tips for C++ developers for embedded systems

If you're a novice or intermediate C++ developer looking to specialize as an embedded software developer, here are a few core competencies and guiding ideas you can study, arranged in order of importance:

  • Understand the embedded systems basics
    Understanding the fundamentals of embedded systems and how they differ from general computing.

    • What are embedded systems? (Microcontrollers, sensors, actuators, etc.)
    • Key differences between embedded and traditional software development.
    • Real-time systems and their importance.

Recommended read/watch: “Introduction to Embedded Systems” by Jonathan Valvano (Textbook).

  • C++ for Embedded Systems
    Learning how C++ is used in resource-constrained environments.

    • Writing memory-efficient and performance-critical code.
    • Avoiding dynamic memory allocation (heap vs stack).
    • Using low-level hardware interfaces (registers, ports, etc.).

Recommended read/watch: “Embedded: Customizing Dynamic Memory Management in C++” by Ben Saks in CppCon 2020.

  • Learning Microcontrollers
    Gain practical experience with microcontrollers, one of the basic programmable elements in embedded development environments.

    • Introduction to microcontrollers (e.g., ARM Cortex, AVR, ESP32).
    • Setting up a development environment (IDE, toolchains).
    • Flashing code to the microcontroller.

Recommended read/watch: “C++ For Microcontrollers – Introduction”  by Mikey’s Lab

  • Optimization and Power Management
    Learn how to optimize embedded C++ code for performance and power consumption.

    • Code optimization techniques (e.g., loop unrolling, inline functions).
    • Power-saving modes in microcontrollers.
    • Balancing performance and power consumption.

Recommended read/watch: “Introduction to Embedded Systems” by Jonathan Valvano (Textbook).

  • Debugging Techniques for Embedded Systems
    Get a proper introduction to the  debugging techniques specific to embedded development.

    • Using in-circuit debuggers (ICDs) and logic analyzers.
    • Setting breakpoints, watching variables, and stepping through code.
    • Dealing with hardware-software integration bugs.

Recommended read/watch: Variety of courses from Feabhas

Visual Studio as the Go-To IDE

In embedded systems  C++ development, a few IDEs stand out for their ability to handle high-performance applications. CLion by JetBrains is popular for its strong code analysis and integration with CMake, supporting multi-platform projects. Its tools for memory profiling and real-time inspections are especially useful in financial software, where precision is key.

Eclipse CDT offers flexibility and powerful debugging features, with support for plugins and external tools like GDB. Its open-source nature makes it a cost-effective choice for developers aiming to optimize performance.

However, Visual Studio is the industry’s top choice, thanks to its advanced debugging tools like breakpoints and call stack analysis, essential for resolving issues in complex financial applications. For custom hardware, it’s common to only get Visual Studio support. It also offers code analysis, performance profiling, and cross-platform support, including Linux. These features make Visual Studio a comprehensive and scalable option, ideal for financial developers seeking reliability across multiple platforms.

Enhancing Productivity with Visual Assist

For C++ developers working in finance, Visual Assist is an indispensable extension that significantly enhances productivity. This powerful tool integrates seamlessly with Visual Studio, offering a range of features designed to make coding faster and more efficient.

A practical example of how Visual Assist can accelerate development is its Convert Pointer to Instance refactoring feature. In financial applications, optimizing memory usage is critical. This feature allows developers to easily convert heap-allocated pointers to stack-allocated instances, which can enhance performance and reduce memory overhead. By simplifying these refactoring tasks, Visual Assist helps developers focus on implementing and refining the core functionalities of their financial software. 
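
For illustration, the kind of transformation this refactoring performs might look like the following before/after; the types and names are invented for the example:

// Illustrative example only: RiskModel, Portfolio, and their members are invented.
#include <vector>

struct Portfolio { std::vector<double> positions; };

struct RiskModel {
    explicit RiskModel(double riskFreeRate) : rate(riskFreeRate) {}
    double Evaluate(const Portfolio& p) const {
        double total = 0.0;
        for (double v : p.positions) total += v * rate;  // toy calculation
        return total;
    }
    double rate;
};

void BeforeRefactoring(const Portfolio& portfolio) {
    // Heap allocation with manual cleanup.
    RiskModel* model = new RiskModel(0.03);
    model->Evaluate(portfolio);
    delete model;
}

void AfterRefactoring(const Portfolio& portfolio) {
    // Stack-allocated instance: no delete, and no leak on an early return.
    RiskModel model(0.03);
    model.Evaluate(portfolio);
}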

In summary, Visual Studio combined with Visual Assist provides a powerful toolkit for C++ developers in the finance industry, enhancing both the development experience and the quality of the final product.

Section 5: The Future of C++ in Embedded Systems for Finance

Emerging Trends

The integration of embedded systems into financial applications is becoming increasingly prevalent, driven by advancements in technology and the growing need for real-time data processing and enhanced security. Embedded systems, such as Internet of Things (IoT) devices and advanced security systems, are playing a crucial role in modern financial infrastructure. For example, IoT devices can provide real-time analytics and monitoring for financial transactions, while sophisticated security systems use embedded technology to protect sensitive data and prevent fraud. 

C++ is well-positioned to adapt to these emerging trends due to its versatility and efficiency. As embedded systems become more integral to financial applications, C++ continues to offer the performance and control needed to develop robust solutions. The language’s ability to interface directly with hardware and manage resources at a low level makes it ideal for embedded development, where precision and efficiency are paramount. Additionally, C++ is evolving to support new standards and libraries that enhance its capabilities for embedded applications, ensuring that it remains a key language in the financial sector’s future.

Preparing for the Future

To stay ahead in the field of C++ development for embedded systems, it is essential to engage in continuous learning and stay abreast of technological advancements. The financial sector is rapidly evolving, and developers must be proactive in acquiring new skills and knowledge to remain competitive. This includes familiarizing oneself with the latest developments in embedded systems, such as new IoT protocols and security technologies, as well as advancements in C++ standards and tools.

Leveraging new tools and technologies can also significantly impact productivity and reduce stress in high-pressure environments. For instance, adopting modern IDEs and development environments that offer powerful debugging, profiling, and refactoring capabilities can streamline the development process and help manage the complexities of embedded systems. Tools that automate routine tasks and provide advanced code analysis can save valuable time and reduce the cognitive load on developers, allowing them to focus on more strategic aspects of their work.

In summary, the future of C++ in embedded systems for finance looks promising, driven by the increasing integration of advanced technologies and the language’s continued evolution. By staying informed about emerging trends and adopting tools that enhance efficiency and reduce stress, C++ developers can position themselves for success in this dynamic and evolving field.

Conclusion

In this blog, we’ve explored the pivotal role of C++ in the development of financial software and embedded systems, highlighting its unmatched performance, reliability, and efficiency. We discussed how C++ meets the rigorous demands of financial applications by offering precise control over system resources and supporting complex, high-performance operations. Additionally, we examined the common challenges faced by developers, such as performance optimization and debugging, and how tools like Visual Studio and Visual Assist can alleviate these difficulties.

As financial systems continue to evolve and embedded systems become more integrated, C++ remains a critical language due to its adaptability and powerful capabilities. The language’s ability to deliver real-time processing and manage resources efficiently ensures its continued relevance in the financial sector.

We encourage readers to explore the benefits of Visual Studio and Visual Assist to enhance their development process. By leveraging these tools, developers can streamline their workflows, improve code quality, and handle the complexities of high-performance financial software more effectively. Embracing these technologies will not only improve development efficiency but also contribute to the creation of robust and reliable financial systems.

The post Getting started with how to use C++ for embedded systems in financial services first appeared on Tomato Soup.

The biggest challenges in writing C++ programs for finance and banking https://www.wholetomato.com/blog/the-biggest-challenges-in-writing-c-programs-for-finance-and-banking/ https://www.wholetomato.com/blog/the-biggest-challenges-in-writing-c-programs-for-finance-and-banking/#respond Wed, 28 Aug 2024 05:44:14 +0000 https://www.wholetomato.com/blog/?p=3899 Introduction When it comes to developing software for the finance and banking industry, C++ is often the language of choice due to its performance, efficiency, and flexibility. However, writing C++ programs in this highly regulated...

The post The biggest challenges in writing C++ programs for finance and banking first appeared on Tomato Soup.

]]>
Introduction

When it comes to developing software for the finance and banking industry, C++ is often the language of choice due to its performance, efficiency, and flexibility.

However, writing C++ programs in this highly regulated and fast-paced environment comes with its own set of challenges. From managing the complexity of legacy codebases to ensuring real-time performance for trading systems, developers face numerous hurdles. Stringent security measures, compliance with industry regulations, and the ever-present demand for high reliability and accuracy compound these problems.

In this blog, we will explore some of the biggest challenges C++ developers encounter when creating software solutions for the finance and banking sector.

Why use C++ in Financial Software

Banks and financial institutions are always looking to improve their trading infrastructure and upgrade their data-management capabilities. Having the best mathematical models helps generate profits and reduce risk in a highly volatile and time-sensitive market.

And it just so happens that C++, a language with low-level control, is the top choice: its speed and efficiency make it the preferred language for high-frequency trading platforms, risk management systems, and other critical financial applications.

The challenges to becoming a programmer in the financial industry

When you’re a developer in the financial industry, it’s almost always a given that apart from being able to program, you would also be able to understand the math to validate various financial models. Some developers may also conduct research and hypothesize on new trading strategies themselves.

Becoming a quantitative analyst, bank developer, or high-frequency trader can be very lucrative career choices. However, it also means that there are stricter requirements and skill sets to be qualified.

As an aspiring developer, here are the key problems and frustrations that C++ developers in the financial industry should keep in mind:

Training requirements and developer skill set

  • Steep learning curve
    You can be a decent trader or researcher using basic programming and scripting languages such as Python. Knowing C++ only at a broad level, however, won't help you much, because you won't be able to exploit its low-latency advantages. If you really want to implement models and develop applications for the industry, you first need a certain level of optimization skill.
  • Understand modeling and simulations. It comes as no surprise, but there is a hefty amount of math involved in the financial industry. Financial algorithms can be mathematically intensive, requiring developers to have a strong understanding of quantitative finance and numerical methods.
  • Need to invest in skills other than programming? Developers often need to implement complex models that simulate market conditions or risk factors, which requires a deep understanding of both finance and C++. However, this is less of a problem if you’re working with a diversified team of developers, traders, and analysts.

Programming requirements: Performance Optimization

  • Low Latency Requirements
    Financial applications, especially in trading, require extremely low latency. Developers must continuously optimize their code to reduce execution time to microseconds or even nanoseconds.
  • Resource Management
    Efficient memory management is crucial—each unoptimized bit of code can add micro delays that can be the difference between a winning and a losing trade. C++ developers need to carefully manage resources, avoid memory leaks, and ensure optimal memory performance in their code (a minimal sketch of one common tactic, preallocation, follows this list).
  • Accuracy and code correctness: Financial applications often rely on parallel processing to handle large volumes of data. The source code and the project itself may not be massive, but the logic must be exact because of the sensitive nature of market prices. Catching developer mistakes in C++ can be challenging, and parallel code in particular is error-prone.
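
Below is that minimal sketch. The Order struct, the OrderPool class, and the pool size are hypothetical names and numbers chosen purely for illustration—real trading systems are far more involved—but it shows the core idea: allocate once up front so the hot path never calls new.

    #include <array>
    #include <cstddef>

    // Hypothetical order record; real systems carry far more fields.
    struct Order { int id; double price; int quantity; };

    // Fixed-size pool allocated once, up front. The hot path only hands out
    // slots from the array, so it never calls new/malloc and never suffers
    // allocation jitter in the microsecond-critical code.
    class OrderPool {
    public:
        Order* Acquire() {
            return used_ < pool_.size() ? &pool_[used_++] : nullptr;
        }
        void Reset() { used_ = 0; }  // reclaim all slots between processing cycles
    private:
        std::array<Order, 4096> pool_{};
        std::size_t used_ = 0;
    };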

Programming requirements: Compliance and Regulations

  • Compliance with regulations
    Apart from being mathematically complex as it is, financial software must comply with stringent company and government regulations. Developers need to ensure that every bit of their code adheres to compliance requirements—which can vary by region and change frequently.
  • Auditability
    The code must be auditable, meaning that it should be easy to trace and understand how financial decisions are made by the software, which adds another layer of complexity.
  • Vulnerability Management
    There are many available libraries and third party extensions for C++ developers. Developers, however, need to stay on top of potential vulnerabilities in C++ libraries or the codebase itself to prevent exploits.

Tips for facing these challenges

  • Study the math, polish your C++
    As mentioned earlier, you can be a pure developer and just implement whatever algorithms are supplied to you. But to become a better analyst and interpret trends yourself, you need to equip yourself with more than programming skills. If you're looking to familiarize yourself with the concepts, there are many great resources available, such as Investopedia. For specific use cases or general C++ skills, a good old reference book (such as those from Scott Meyers, or one from Bjarne Stroustrup himself) will always be a great option. For high-performance C++ in particular, there are also excellent resources available online.

  • Invest in understanding above and beyond your tasks

Banks and financial institutions, especially top ones, hire only cream-of-the-crop developers. Average developers with pedestrian finance knowledge are less appealing for the simple reason that, for an expensive role, financial firms expect maximum returns.

This often means that being a financial developer entails learning about current market trends, opportunity costs, and economic theory yourself—not just the technical aspects of implementing them in an algorithm.

  • Get all the help you can

Take note of tidbits of knowledge you’ll pick up on the spot from existing codebases accessible to you. Colleagues may also come to you directly and give you advice on how best to tackle certain financial puzzles.

As for developer tools, how much technology can help when you're developing software and finance algorithms is often underestimated. A conducive, smart development environment can be the small difference between a timely implementation that brings your company massive profits and an unfortunate missed opportunity.

Try to invest in software that lets you focus on the core work of thinking and planning. For example, there are many productivity tools available that help developers monitor code quality, and others that help maintain or refactor code bases. All of these can help you stay on the cutting edge.

Protip for those coding in Visual Studio C++

Visual Studio remains the premier IDE for serious C++ programming such as financial services—including deploying to Linux. It is a robust IDE for developing C++ financial programs because it offers powerful debugging and code analysis tools, which are crucial for maintaining high-quality, error-free code in critical financial applications, plus strong performance and profiling tools.

It provides extensive support for modern C++ standards and libraries, ensuring compatibility and performance optimization. The IDE integrates well with various version control systems, enabling smooth collaboration and code management among development teams. Additionally, Visual Studio’s extensive ecosystem of extensions and plugins allows developers to customize their environment to fit specific financial industry requirements.

There are general plugins that augment the entire IDE with faster processes and more intuitive workflows. For example, Visual Assist, one of the most popular VS extensions, provides faster ways to navigate projects, convenient one-click solutions to maintaining code, and additional syntax support not available in the default VS IDE. Here are some specific features:

When writing high-performance C++ you'll find yourself doing things like avoiding memory allocation, and Visual Assist's set of refactorings can help with the code restructuring those improvements require. A trivial example is converting a heap allocation to a stack allocation via the Convert Pointer to Instance refactoring.
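
As a hedged illustration of what that refactoring amounts to (the PriceFeed class and the function names here are hypothetical, and Visual Assist's exact output may differ), the change is essentially this before/after:

    #include <iostream>
    #include <string>

    // Hypothetical class used only to illustrate the refactoring.
    class PriceFeed {
    public:
        explicit PriceFeed(std::string exchange) : exchange_(std::move(exchange)) {}
        void Poll() { std::cout << "polling " << exchange_ << '\n'; }
    private:
        std::string exchange_;
    };

    // Before: heap allocation with manual lifetime management.
    void ProcessBefore() {
        PriceFeed* feed = new PriceFeed("NASDAQ");
        feed->Poll();
        delete feed;
    }

    // After Convert Pointer to Instance: a stack object, no new/delete,
    // destroyed automatically at the end of the scope.
    void ProcessAfter() {
        PriceFeed feed("NASDAQ");
        feed.Poll();
    }

    int main() {
        ProcessBefore();
        ProcessAfter();
        return 0;
    }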

You can’t underestimate how helpful it is especially in a high-stress and time-sensitive profession.

Those jobs are high stress and lots of crunch is expected. Our navigation features get you around much faster than the built in tools Open File in Solution, Find Symbol in Solution and Find References just works that much better and faster.

Conclusion

Becoming a programmer in the financial industry is no small task. There are many significant challenges presented to you both as a programmer and as a learner. It is a constantly evolving profession—like a perpetual hackathon. You have to stay on top of tech and industry trends to ensure your company is getting the best results it can.

Study beyond your assigned tasks. Utilize all the tools at your disposal. And most importantly, persevere.

The post The biggest challenges in writing C++ programs for finance and banking first appeared on Tomato Soup.

]]>
https://www.wholetomato.com/blog/the-biggest-challenges-in-writing-c-programs-for-finance-and-banking/feed/ 0 3899
Catching up with VA: Our most recent performance updates https://www.wholetomato.com/blog/catching-up-with-va-our-most-recent-performance-updates/ https://www.wholetomato.com/blog/catching-up-with-va-our-most-recent-performance-updates/#respond Sun, 21 Jul 2024 21:31:14 +0000 https://www.wholetomato.com/blog/?p=3848 Throughout its long lifetime, Visual Assist (VA) has been a top-of-the-line productivity plugin with a performance advantage over Visual Studio and other plugins. Performance and speed has been a bread-and-butter factor for choosing VA—and we’ve...

The post Catching up with VA: Our most recent performance updates first appeared on Tomato Soup.

]]>
Throughout its long lifetime, Visual Assist (VA) has been a top-of-the-line productivity plugin with a performance advantage over Visual Studio and other plugins. Performance and speed have been a bread-and-butter factor for choosing VA—and we've doubled down with updates focused on cutting interruptions and load times.

VA had significant improvements in 2024, particularly in the initial startup time for projects, as well as in the responsiveness of a few key features. It is not farfetched to say that performance has been the primary consideration for the development direction of the plugin.

Why? Because performant, responsive software is productive software, and fast interaction is key to you getting your work done.

We have a lot of solid, robust features based purely on providing not the kitchen sink, but what you need. We already had a reputation for being faster than other products. Now we are even faster: VA is a lean, mean, coding machine.

We’re midway through the year, and we’re summarizing all the recent performance updates in this handy update blog. Read on further to get a more complete picture of when and why these changes were introduced to VA.

Faster startup sequence

Whatever task you have, you first must open and launch Visual Studio—along with any installed plugins. Opening a Visual Studio-associated file initiates the startup process, which loads the essential IDE assets, the solution files you have chosen, and ultimately any auxiliary components like Visual Assist.

While we cannot alter the core loadout of Visual Studio, we’ve worked on every facet of our tool that can be optimized for faster startup:

  • Project initial parsing

    Project parsing is an extra step that code assistant plugins like Visual Assist need to undertake. VA uses its own parser independent of Visual Studio’s which allows it to pre-scan projects so it can be faster, smarter, and able to provide different functions.

    The release in January 2024 featured an overhaul of the parser, which reduced startup times for opening previously unparsed project files by up to 15 times.

    An initial parse is only done the first time you open a project; the next time you open it, loading is near instant. (This was an existing behavior.) What's new is that the initial parse itself is up to 15x faster—a big win for those of you who open new projects frequently.

    For example, an Unreal Engine project with its typically massive code base previously took 15 minutes to parse. We’ve brought this down to a mere one minute of parsing.

    Tech details: Visual Assist implemented a cache for parsed directories to bypass slow Windows file IO API calls where the same call is expected to give the same result—this significantly reduced the initial parse time.

     

  • Plugin load time

    This update refers to the time it takes for Visual Assist's features to become functional. As mentioned above, time-to-functional is part of Visual Studio's overall startup routine, which includes loading plugins.
    Every time you close and open a solution, VA's features take a few moments to load—or at least that's how it was before. With this update, time-to-functional is more or less instantaneous even in extremely large solutions!

    As soon as Visual Studio calls on Visual Assist to start loading, you’ll immediately see coloring and syntax highlighting, and have access to all navigation and features. (Note: How Visual Studio initializes plugins and components is indeterminate; results may vary slightly depending on how many components it loads first before Visual Assist.)

    What these changes mean for you:

     

    Depending on how often you need it, the Visual Studio startup sequence and project load can be part of your feedback cycle when testing and coding. Even a mere 30 seconds is painful and a threat to productivity when repeated, especially as those delays add up over a work week.

    This is even more pronounced when your work entails opening new projects multiple times in a week. Visual Assist is the best in-class plugin that offers significantly less startup time—giving you more time to be productive.

    READ: Visual Assist startup duration update

Search dialogs: Find References and File Finding

Since starting our crusade against a slow and unresponsive IDE, we have shipped two updates that shorten the loading time for finding references and symbols. By utilizing techniques such as parallelism and removing extraneous string searches, these updates deliver up to ten times faster searches.

Furthermore, better accuracy and new functionality have been added to other search dialogs, including fuzzy search for Open File in Solution.

  • Find references speed and responsiveness

Find references is a feature that looks for symbol usage within the current project or solution. Depending on the project size, there may be hundreds or thousands of symbol definitions in your solution, and many of them may be used tens of thousands of times. In order for code navigation to work, VA must scour its database for the correct results.

Find references time increases with the number of symbols in the database. However, VA’s feature has been greatly improved for performance and speed—almost ten times faster than before! That means that this performance improvement applies to many key features and navigations.

Some other common and key features in VA improved by this change: 

  • Renaming finds references in order to rename them.
  • Implement Methods finds methods in order to know which ones do and do not exist.
  • Change Signature works similarly.

    Visual Assist’s Find references window. Takes significantly less time to find all references in 2024.3.

  • Fuzzy search and uppercase search for opening files and searching symbols

    Fuzzy search is a technique used in searches and information retrieval to find approximate matches for a given query, accommodating variations like typos and misspellings. It employs string distance metrics to measure the similarity between strings.

    Apart from being fast, Open File in Solution and Find Symbol in Solution support this technique, so you can expect more meaningful results even from shorter or less exact search queries.

    Furthermore, beyond fuzzy searching for inexact matches, VA will also match capital letters. For example, if you have a class named MyClassName, searching for "mcn" would find it. Similarly, if you have a global variable named myGlobalVariable and type "mgv", the lowercase leading "my" is treated as if the name were MyGlobalVariable, so you still get the expected result.

  • Move Class feature

    Refactoring and moving entire classes can be a hassle. This feature has completed its beta phase to provide full support for porting an entire class to the file(s) of your choosing.

  • Bonus QoL Change: Select all items in open file in solution (Ctrl + A)

    You can now select and highlight multiple files and open them simultaneously when using open file in solution. The usual shortcut Ctrl + A works.

    What these changes mean for you:

    As a C++ developer, you frequently search for files and symbols in massive projects. So even small reductions in wait times or interruptions cumulatively boost your overall productivity to a significant degree.

Summary

Performance improvements are and will remain the focus of Visual Assist in upcoming releases. As projects grow larger and C++ features grow in complexity, we too must adapt and scale our performance to meet the increasing workload and demands on our parser and product capabilities.

This is our most important aim: speedy performance and accurate responses so you can focus on thinking and problem solving—the crucial parts of coding.

We’re only halfway through the year, so let us know what we should improve upon next. Thank you for your continued use and support of Visual Assist!

The post Catching up with VA: Our most recent performance updates first appeared on Tomato Soup.

]]>
https://www.wholetomato.com/blog/catching-up-with-va-our-most-recent-performance-updates/feed/ 0 3848
Visual Assist 2024.4 release post – ARM Support https://www.wholetomato.com/blog/visual-assist-2024-4-release-post-arm-support/ https://www.wholetomato.com/blog/visual-assist-2024-4-release-post-arm-support/#respond Wed, 12 Jun 2024 22:00:59 +0000 https://www.wholetomato.com/blog/?p=3839 It’s our pleasure to announce a new Visual Assist release, headed by a major addition—supporting ARM! We hope you find this release useful. Visit our website to download the release. ARM support Big news for...

The post Visual Assist 2024.4 release post – ARM Support first appeared on Tomato Soup.

]]>
It’s our pleasure to announce a new Visual Assist release, headed by a major addition—supporting ARM!

We hope you find this release useful. Visit our website to download the release.

ARM support

Big news for Visual Assist's device support! Windows on ARM is now supported starting with this release, Visual Assist 2024.4. Visual Assist is now available as a fully ARM-native plugin, supported in Visual Studio's ARM build. This means Visual Assist is now compatible with Macs and Windows devices running on an ARM processor.

We first asked our community about ARM support some time ago. At the time, while it was clear ARM was growing for Macs, it was unclear how strongly it would grow for Windows, and we planned support for a future date. Since then, we've seen growing interest and customer requests—and we're happy to deliver! From what we see, many people, including large companies, are increasingly interested in or already using ARM for Windows.

There are many advantages to using Windows ARM devices, from battery life to performance. One key advantage is for the many developers who target ARM devices and are used to debugging remotely: while debugging on-device or on-simulator remains important, it can be slow, and doing minute-to-minute development on a machine that shares the same CPU architecture can be very useful.

ARM is a completely new front for us and we would like to know more about how we can improve the experience for ARM users. If you’re part of the group that would benefit from this update, please let us know more by answering this short survey.

Path “/” delimiter

This simple change adds an option for users who are used to using "/" as the path delimiter when searching directories. It comes on the heels of users from different operating systems sharing that their default delimiter style was not supported.

With this change, you can now choose which delimiter is used by default. This applies to most of Visual Assist's search windows, such as Open File in Solution.

Bug fixes and improvements

Apart from the above major additions, we have a couple of minor bug fixes and QoL changes. The highlight is a fix for recognizing std::tuple from the standard library.

The complete list is below: 

  • Fixed issue where std::tuple would not be recognized in some cases.
  • Move Class to New File will no longer jump to a new file before showing the dialog.
  • Fixed broken Discord invite link.

Send us a message or start a thread on the user forums for bug reports or suggestions. Don’t forget to join our Discord too!

Visit our download page to update to the latest release manually. Happy coding!

The post Visual Assist 2024.4 release post – ARM Support first appeared on Tomato Soup.

]]>
https://www.wholetomato.com/blog/visual-assist-2024-4-release-post-arm-support/feed/ 0 3839
Visual Assist 2024.3 release post https://www.wholetomato.com/blog/visual-assist-2024-3-release-post/ https://www.wholetomato.com/blog/visual-assist-2024-3-release-post/#respond Thu, 02 May 2024 20:42:22 +0000 https://www.wholetomato.com/blog/?p=3811 Another Visual Assist update?! VA 2024.3 is headlined by a dramatic improvement to the performance of Find References. This release also features both a fix and an improvement related to Move Implementation. We also have...

The post Visual Assist 2024.3 release post first appeared on Tomato Soup.

]]>
Another Visual Assist update?! VA 2024.3 is headlined by a dramatic improvement to the performance of Find References. This release also features both a fix and an improvement related to Move Implementation. We also have some key features exiting their beta phase (try them out!). Lastly, performance for C# should be better than ever with key fixes rolling out in this release.

Download the release now from our website.

Better find references results in multiple faster features

If you’ve updated to at least Visual Assist 2024.1, you may have been enjoying the benefits of the significantly improved parser performance that cut initial parsing time fifteenfold. In this release, we’ve added something even bigger: performance improvements not at startup, but all the time

Find references, the feature that looks for symbol usage within the current project or solution, has been greatly improved for performance and speed. But the Find References engine is used by many other common and key features in Visual Assist! Renaming finds references in order to rename them; Implement Methods finds methods in order to know which ones do and do not exist; and so forth. That means this performance improvement applies to many key features and navigations: Rename, Change Signature, Implement Methods, and more.

Visual Assist’s Find references window. Takes significantly less time to find all references in 2024.3.

Test Results

The development team ran a few tests to compare the performance of Find References between the new Visual Assist version and an older version of the same plugin. They also tested it against the performance of Visual Studio's default Find References.

The test was done on Unreal Engine 5.3 source code using the Lyra game example, searching for references to two symbols: TOptional and MakeBox. The tests used Visual Studio 2022 17.8 with Visual Assist 2024.3 and 2024.2. Time was measured from the start of Find References until all references were found.

The result of the tests are as follows:

Setup 1 – TOptional:

                        Run 1     Run 2     Run 3     Average
Visual Assist 2024.3    5:11      4:25      4:17      4:37
Visual Assist 2024.2    14:27     18:02     13:12     15:13
Visual Studio 2022      38:26     *         *         38:26

Setup specs: AMD Ryzen 7 7800X3D processor, Team T-Force Delta 32GB (2 x 16GB) 288-Pin PC RAM, Crucial T700 Gen5 NVMe M.2 SSD
* Test timeout.

 

Setup 2 – MakeBox:

                        Run 1     Run 2     Run 3     Average
Visual Assist 2024.3    0:42      0:45      0:43      0:43
Visual Assist 2024.2    1:41      1:40      1:34      1:38
Visual Studio 2022      2:34      2:22      2:27      2:27

Setup specs: AMD Ryzen 7 7800X3D processor, Team T-Force Delta 32GB (2 x 16GB) 288-Pin PC RAM, Crucial T700 Gen5 NVMe M.2 SSD

As one can surmise from the results, the latest update brings Visual Assist's symbol-finding performance well above that of default Visual Studio and other similar plugins. Further testing on other platforms will be undertaken; please check back on this page for more results.

Exiting Beta: CUDA core development support & Move Class feature

Two VA features enter their stable phase and are now generally available. If you have not tried them yet, we highly recommend doing so, as they provide a lot of value that might not be readily apparent.

  • CUDA support
    First added in 2023.4, CUDA support allows Visual Assist to recognize CUDA files and parse and highlight them like regular C/C++ files. This feature now enters fully supported status, and you can reliably use IntelliSense-like features for CUDA files.
  • Move Class feature
    Refactoring and moving entire classes can sometimes be a hassle. This feature moves from beta to supported status and allows you to easily choose an entire class and port it over to the file(s) of your choosing.

Create File: specify a directory + auto implementation.

This is a tiny but useful quality-of-life change for creating files. Prior to this change, if a target file was not found, Visual Assist would display a failure error and ask whether you wanted to run Create File or stop. Now it runs Create File automatically, and you can hit Cancel instead.

Furthermore, there is a bug fix for Create File: Visual Assist will now consistently move the implementation afterwards. (In the past, it sometimes failed to do so.)

These two changes will hopefully make your experience more seamless and intuitive.

Discord link and feedback options in the Help menu

Introducing our newly opened Discord server for all Visual Assist users. We're hoping this hub will function like our forums, where users can request changes, report bugs, and share useful information and tips about the plugin.

As it’s a WIP, anyone who is interested in helping us manage and build the community is welcome to do so. Send us a message here if you’re interested.

Furthermore, we’ve added new feedback channels in one of our menus. Navigate to Help and browse new feedback options and let us know what you think!

Bug fixes and improvements

Apart from the above major fixes, we have a couple of minor bug fixes and QoL changes. The complete list is below: 

  • Fixed issue where Move Implementation would not move the implementation if a new file needed to be created.
  • Improved editor performance when editing C#.
  • Fixed Add Include issue where C headers would sometimes be added instead of their C++ counterparts.
  • Fixed issue where Move Class to New File would sometimes not be offered near macros.

Send us a message or start a thread on the user forums for bug reports or suggestions.

Visit our download page to update to the latest release manually. Happy coding!

The post Visual Assist 2024.3 release post first appeared on Tomato Soup.

]]>
https://www.wholetomato.com/blog/visual-assist-2024-3-release-post/feed/ 0 3811
Visual Assist 2024.2 release post https://www.wholetomato.com/blog/visual-assist-2024-2-release-post/ https://www.wholetomato.com/blog/visual-assist-2024-2-release-post/#respond Thu, 28 Mar 2024 18:45:09 +0000 https://www.wholetomato.com/blog/?p=3797 It only has been a minute since the last performance-focused release but Visual Assist 2024.2 is here, squeezing even more performance to set it apart from other coding assistants! Continuing the theme of the last...

The post Visual Assist 2024.2 release post first appeared on Tomato Soup.

]]>
It has only been a minute since the last performance-focused release, but Visual Assist 2024.2 is here, squeezing out even more performance to set it apart from other coding assistants! Continuing the theme of the last version, this release focuses on getting rid of interruptions and downtime, and on making the Visual Studio experience as responsive as possible.

Download the release now from our website.

Significantly faster plugin startup time—especially in large solutions.

This update refers to the time it takes for Visual Assist's features to become functional. Every time you close and open a solution, the plugin's features take a few moments to load—or at least that's how it was before. With this update, time-to-functional is more or less instantaneous even in extremely large solutions.

As soon as Visual Studio calls on Visual Assist to start loading, you immediately see coloring and syntax highlighting, and all navigation and other features are accessible. (Note: How Visual Studio initializes plugins and components is indeterminate; results may vary slightly depending on how many components it loads before Visual Assist.)

This is not to be confused with the initial parse time update that we did in VA 2024.1 which is only a one-time process that happens with each new solution.

Further improvement to our initial parse time.

As mentioned above, we made significant improvements to the initial project parsing in 2024.1. Most of those benefits were the result of optimizing how Visual Assist goes through files as it traverses references and includes.

To summarize, Visual Assist used a cache for parsed directories so that it does not have to access the hard disk when an include is referenced multiple times—this significantly reduced the initial parse time.
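
The snippet below is only an illustrative sketch of that caching idea, not Visual Assist's actual implementation: a directory listing is read from disk once and then served from memory for every later request for the same directory.

    #include <filesystem>
    #include <string>
    #include <unordered_map>
    #include <vector>

    // Toy directory cache: the first lookup walks the file system; every
    // repeat lookup for the same directory is answered from memory.
    class DirectoryCache {
    public:
        const std::vector<std::string>& List(const std::string& dir) {
            auto it = cache_.find(dir);
            if (it != cache_.end())
                return it->second;  // cache hit: no file IO
            std::vector<std::string> entries;
            for (const auto& entry : std::filesystem::directory_iterator(dir))
                entries.push_back(entry.path().filename().string());
            return cache_.emplace(dir, std::move(entries)).first->second;
        }
    private:
        std::unordered_map<std::string, std::vector<std::string>> cache_;
    };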

In 2024.2, however, the developers have squeezed out more performance by optimizing smaller items such as string operations and parse logic. This produced a relatively modest but still significant decrease in project parse time. The result is up to 50% faster parse time versus the previous version; in absolute terms, VA 2024.2 is around 20 seconds faster than VA 2024.1 in our test scenario, where the Lyra demo is now ready in under a minute.

Testing:

Initial parsing time is measured from the point where Visual Assist starts parsing to the point where it completes. The test used the latest Visual Studio 2022 (version 17.8.6), again on the Lyra sample game project provided by Epic Games, with the same high-end PC and laptop setups used to test the 2024.1 changes.

Setup 1:

                        Run 1     Run 2     Run 3     Average
Visual Assist 2024.1    01:09     01:05     01:03     01:06
Visual Assist 2024.2    00:54     00:51     00:54     00:53

Setup specs: AMD Ryzen 7 7800X3D processor, Team T-Force Delta 32GB (2 x 16GB) 288-Pin PC RAM, Crucial T700 Gen5 NVMe M.2 SSD

 

Setup 2: 1.19x faster

                        Run 1     Run 2     Run 3     Average
Visual Assist 2024.1    01:30     01:31     01:27     01:29
Visual Assist 2024.2    01:18     01:15     01:12     01:15

Setup specs: 12th Gen Intel(R) Core(TM) i9-12950HX CPU, DDR5-4800 (2400 MHz) 32 GB (2×16 GB), 2 TB SSD, ASUS ROG Strix SCAR 17 SE (2022) G733CX laptop, UE 5.2.1 Lyra game

 

Setup 3: 1.54x faster

                        Run 1     Run 2     Run 3     Average
Visual Assist 2024.1    02:15     02:02     02:06     02:07
Visual Assist 2024.2    01:28     01:16     01:24     01:22

Setup specs: 12th Gen Intel(R) Core(TM) i9-12950HX CPU, DDR5-4800 (2400 MHz) 32 GB (2×16 GB), 2 TB SSD, ASUS ROG Strix SCAR 17 SE (2022) G733CX laptop, UE 5.3.2 Lyra game

Improved add include for Unreal Engine.

Adding includes when working with Unreal projects has been improved in two ways. First, include directives in C++ use either angle brackets or quotation marks: generally, <> is for system includes and "" is for user includes. However, there is a stylistic convention when working with Unreal.

This update adds logic such that when you’re adding includes in an Unreal project, Visual Assist will consistently choose quotations—the preferred style for Unreal development.
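
To make the two styles concrete (the engine header below is the same one used as an example later in this section; the standard header is ordinary C++):

    #include <vector>                            // system/library header: angle brackets
    #include "GameFramework/PlayerController.h"  // engine/project header: quotes, the style VA now picks for Unreal projects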

Second, the include directories used when adding includes will now produce more accurate paths. Visual Assist tries to make sense of directory paths, subfolders included. This is especially useful when working with Unreal Engine, which is known to generate awkward include paths.

Unreal Engine generates its own solutions; and while these are not actually used to build your game, the include directories they declare are still read and used to generate include paths when adding new includes. VA adds includes perfectly for normal C++ projects, but some UE solutions can have incorrect include paths set up, which may pose issues.

This manifests as very long and unwanted paths, such as this one when adding the player controller: #include “../../../../../../../Source/Runtime/Engine/Classes/GameFramework/PlayerController.h”

Now, VA traverses the directory structure and figures out the paths itself instead of trusting the solution. We replaced our logic to mostly ignore the include directories given to us by the solution in lieu of traversing the directory structure ourselves. This lets us build our own 'effective' list of include directories, which we use to generate include paths for new includes.

For the above example, it would now add: #include “GameFramework/PlayerController.h”—which is what you expect and want as a UE developer. 

Fix syntax coloring in C# for Visual Studio 2022.

A recent Visual Studio 2022 update changed an API that Visual Assist uses to provide coloring and syntax highlighting. This update broke Visual Assist’s coloring and syntax highlighting for C#. 

A near total rewrite has been implemented and syntax coloring should be working now. However, there may be a slight difference in how Visual Assist colors C# files as we reoptimize with the rewritten code.

Syntax highlighting and coloring in C++ have remained unaffected, but Visual Assist plans on implementing the new API setup for it as well. This should also fix some minor coloring issues.

Fixed compatibility issues with GitHub Copilot.

Visual Assist is now completely compatible with Copilot, Microsoft’s AI coding assistant. 

Earlier this year, a bug report was filed on our forums describing a situation where Visual Assist seems to be interfering with Copilot’s chat functionality. This has led to the unwanted situation wherein users have to disable either Copilot or Visual Assist, as some features may not work simultaneously.

All known incompatibility issues have been resolved and addressed in 2024.2. If you encounter any similar bugs, please send us a bug report.

Fixed Open File in Solution issue when the filter starts with a dot.

When starting a query with a dot (.), Open File in Solution may sometimes fail to display the expected results. 2024.2 fixed the ‘dot’ filtering which was a common user complaint.

Search filtering features are available by starting with a dot to find files that begin with the filter, or contain the dot and substring. A filter that ends with a dot matches the ends of file names. For example “string.” finds files whose base names end with “string”. This dot filtering is also possible in other dialogs of Visual Assist that support filtering.
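
As a toy illustration of that rule (this is not Visual Assist's matching code, just a sketch of the behavior described above):

    #include <string>

    // Toy sketch of the dot filter:
    //   ".str"  -> name begins with "str", or contains ".str"
    //   "str."  -> base name (before the extension) ends with "str"
    bool MatchesDotFilter(const std::string& name, const std::string& filter) {
        if (filter.size() > 1 && filter.front() == '.') {
            const std::string needle = filter.substr(1);
            return name.rfind(needle, 0) == 0 ||              // begins with needle
                   name.find(filter) != std::string::npos;    // or contains ".needle"
        }
        if (filter.size() > 1 && filter.back() == '.') {
            const std::string needle = filter.substr(0, filter.size() - 1);
            const std::string base = name.substr(0, name.rfind('.'));
            return base.size() >= needle.size() &&
                   base.compare(base.size() - needle.size(), needle.size(), needle) == 0;
        }
        return name.find(filter) != std::string::npos;         // plain substring match
    }

With this toy version, "mystring.cpp" matches the filter "string." while "stringutils.cpp" does not.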

Bug Fixes & General Improvements

Apart from the above major fixes, we have a couple of minor bug fixes and QoL changes. The complete list is below.

  • Fixed UI conflict with GitHub Copilot.
  • Fixed issue where Add Include would sometimes not add the new include.
  • Fixed long Add Include paths for some symbols in Unreal Engine 5.3.x.
  • Fixed issue where Open File in Solution would sometimes not display results when the filter starts with a dot.
  • Fixed issue where C# syntax coloring would not be applied in Visual Studio 2022 17.9.0.
  • Fixed issue where readability-magic-numbers Code Inspection would not properly underline hex numbers.
  • Fixed issue where GoTo would not navigate to classes without a constructor.
  • Fixed issue where suggestions could be shown for non-existent types.
  • Updated Create Account link to point to the correct page.
  • Added Alt+O to Recommended Keyboard Shortcuts as Visual Studio 2022 now uses that binding.

Send us a message or start a thread on the user forums for bug reports or suggestions.

Visit our download page to update to the latest release manually. Happy coding!

The post Visual Assist 2024.2 release post first appeared on Tomato Soup.

]]>
https://www.wholetomato.com/blog/visual-assist-2024-2-release-post/feed/ 0 3797
Visual Assist 2024.1 release post https://www.wholetomato.com/blog/visual-assist-2024-1-release-post/ https://www.wholetomato.com/blog/visual-assist-2024-1-release-post/#respond Wed, 31 Jan 2024 09:32:52 +0000 https://www.wholetomato.com/blog/?p=3725 The first release of the year is here with Visual Assist 2024.1. This update is headlined by the overhaul of our parser, which significantly reduces users’ initial startup times for projects. Also in this release:...

The post Visual Assist 2024.1 release post first appeared on Tomato Soup.

]]>
The first release of the year is here with Visual Assist 2024.1. This update is headlined by the overhaul of our parser, which significantly reduces initial startup times for projects. Also in this release: key behavioral fixes for a few of VA's navigation features, a UI update for the ubiquitous dropdown toolbar, and a plethora of bug fixes and QoL improvements.

Download the release now and get the benefits of VA 2024.1.

Significantly faster initial startup time


Initial parse time is defined as how long it takes Visual Assist and Visual Studio to become fully active, starting from the moment a project is loaded for the first time until the initial parse fully completes (i.e., all features loaded and functional).

Startup times just got a major boost in the first release of Visual Assist this year. The initial project parsing that Visual Assist executes when opening a project for the first time is now significantly faster. An example Unreal Engine project that used to take 15 minutes to open for the first time now takes just under two minutes! This is a huge improvement, and you will see it reflected in all projects that are opened and parsed.

More testing is underway to provide a more accurate performance number, but the development team has found excellent results in their tests so far. Reports show a trend of significantly reduced parse times for a sizable Unreal Engine project—with results averaging up to fifteen times faster initialization.

Update on Initial Parsing Time: More Testing Results

More testing results for Visual Assist’s updated parser are in! Here are the results:

Initial parsing time is measured from the point where Visual Assist starts parsing to the point where it completes. The test used the latest Visual Studio 2022 version as of Feb 10 (VS 2022 17.8.6) on the Lyra sample game project provided by Epic Games. Two performance benchmarks on two different devices were run using the same methodology.

Device 1  (High-end Desktop PC)

                        Run 1      Run 2      Run 3      Average
Visual Assist 2024.1    0:01:13    0:01:05    0:01:06    0:01:08
Visual Assist 2023.6    0:11:55    0:11:57    0:12:42    0:12:11

Device 2 (Gaming-class laptop)

                        Run 1      Run 2      Run 3      Average
Visual Assist 2024.1    0:02:12    0:02:17    0:02:10    0:02:13
Visual Assist 2023.6    0:29:37    0:28:52    0:30:09    0:29:33

Both test runs show very exciting results for the overhauled VA 2024.1 parser over its immediate predecessor VA 2023.6.

The tests showed average parse times roughly 10.75 times faster on the high-end desktop PC and roughly 13.3 times faster on the powerful, albeit relatively less performant, gaming laptop—call it 11 and 13 times faster, respectively.

There is variance in the gains between the two devices, with the relative improvement being larger on the less powerful laptop. We suspect the gains could be even larger on low- and mid-range computers or laptops.

Curious to see how VA 2024.1 performs on your platform? Download a free trial of Visual Assist and try it for yourself now.

Navigate directly to a class constructor definition from an explicit constructor call

This neat addition to VA’s find reference and go to reference features allows users to find and navigate to a class’s constructor definition from a call to that constructor. 

Highlight or click on a constructor call and use the shortcut Alt + G to navigate instantly to that constructor's definition.
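
A quick, self-contained illustration (the Portfolio class is hypothetical): with the caret on the constructor call in main, Alt + G jumps to the constructor defined above it.

    #include <iostream>

    // Hypothetical class used only to show what an explicit constructor call is.
    class Portfolio {
    public:
        explicit Portfolio(int accountId) : accountId_(accountId) {
            std::cout << "portfolio for account " << accountId_ << '\n';
        }
    private:
        int accountId_;
    };

    int main() {
        Portfolio p(42);  // constructor call: Alt + G here navigates to Portfolio::Portfolio(int)
        return 0;
    }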

Improved and expanded header selection when using Add Include

This release greatly improves VA’s Add Include detection and expands the number of actual includes supported. 

If you have not used this feature extensively: VA can automatically add include directives for you when it detects that you are using an undeclared type from a known library such as the STL, or even from your own code elsewhere. Specifically, this update adds recognition for many more standard types, such as std::stringstream and std::once_flag.

In essence, Add Include should automatically insert the correct include directive in many more circumstances.
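
For example, code like the following needs <sstream> for std::stringstream and <mutex> for std::once_flag and std::call_once; with this update, Add Include should be able to insert those directives for you (shown here already added):

    #include <iostream>
    #include <mutex>     // for std::once_flag / std::call_once
    #include <sstream>   // for std::stringstream

    std::once_flag initFlag;

    int main() {
        std::call_once(initFlag, [] { std::cout << "initialized once\n"; });

        std::stringstream ss;
        ss << "value: " << 42;
        std::cout << ss.str() << '\n';
        return 0;
    }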

Code completion dropdown toolbar now displayed by default 

The code completion toolbar is now turned on by default and is displayed more frequently. This quality-of-life change brings a visual UI as you write code: VA tries to predict your intended actions, so the options shown are contextual as well as accurate.

Furthermore, when you type code, the code completion UI is shown by default regardless of whether your cursor is hovering over the current portion of the code.


The code completion toolbar is displayed as you type code.

For very large projects and long source code, you can use the filter options (highlighted in the screenshot above) to select which options are shown in the new toolbar.

Bug fixes and improvements

For this release, we have several fixes—both from examining recent features and from user reports. The most notable of these improvements include visual fixes to a number of features and better parser recognition of Unreal code.

  • Fixed visual issues with completion dropdown toolbar
  • Fixed issue where trial activation dialog could display an error and prevent activation
  • Fixed issue with the new "Magic Numbers" Code Inspection where it highlighted only a portion of the constant
  • Fixed issue where logging could overflow and cause a crash when enabled alongside very large solutions
  • Fixed issue where preprocessor directives in shader files were sometimes colored as methods
  • Fixed issue where Unreal Engine Create***Subobject symbols were not recognized by our parser
  • Fixed issue where changing the signature of an Unreal Engine method which requires a *_Validate thunk would result in rewriting the return of the *_Validate thunk to void.
  • Fixed issue where typing a dot at the start of a word in a few of our dialogs would result in no hits being displayed

Many thanks to those who submitted suggestions and error reports. Please continue reporting problems you find along the way. To report bugs, you can send us a message or start a thread on the user forum.
You can also check our download page to update to the latest release manually. Happy coding!

The post Visual Assist 2024.1 release post first appeared on Tomato Soup.

]]>
https://www.wholetomato.com/blog/visual-assist-2024-1-release-post/feed/ 0 3725
Ensuring Code Quality: Why Every C++ Developer Needs Unit Tests https://www.wholetomato.com/blog/ensuring-code-quality-why-every-c-developer-needs-unit-tests/ https://www.wholetomato.com/blog/ensuring-code-quality-why-every-c-developer-needs-unit-tests/#respond Tue, 31 Oct 2023 23:09:59 +0000 https://blog.wholetomato.com/?p=3469 Modern programming languages evolve and are continuously refined even further with each new update. During these incremental stages of development, components such as compilers, IDEs, libraries, their units, their components, and tools undergo changes. Furthermore,...

The post Ensuring Code Quality: Why Every C++ Developer Needs Unit Tests first appeared on Tomato Soup.

]]>

Modern programming languages evolve and are continuously refined with each new update. During these incremental stages of development, components such as compilers, IDEs, libraries and their components, and tools undergo changes.

Furthermore, there are also rapid changes in operating systems and hardware. This means that if you are developing applications professionally, you must test your units or components at the beginning of your main application's development to make sure they are compatible with the newly released versions.

C++ is a very powerful and modern programming language, but to keep up with ever-changing industry demands, standard language practices and conventions constantly change. Thus, C++ applications need to be regularly maintained by software developers and engineers.

All of this requires rapid unit testing to ensure that company and technical requirements are met, with better memory management and improved runtime performance in the main application. Let's learn more about unit testing and how it's used to maintain source code.

Why do developers need to test C++ Code?

Developers need to test C++ code for various crucial reasons. First and foremost, testing is a fundamental means of detecting and addressing bugs, errors, and issues within the codebase. By running tests, developers can catch problems early in the development process, saving time and resources. Additionally, testing ensures that the software behaves as expected and meets its requirements and specifications. 

It also plays a pivotal role in regression prevention, safeguarding existing functionality as code evolves. Moreover, tests serve as documentation, providing examples of code usage and clarifying its intended behavior, making it easier for developers to understand and work with the code. Testing encourages good coding practices, promoting modularity and maintainability. It facilitates collaboration by allowing multiple developers to work on a project with confidence. 

More importantly, performance itself can be tested, and memory and CPU usage are critical in C and C++ programming—efficiency is one of the language's main strengths. Ensuring the safety of your product and minimizing memory usage are crucial for reliability and usefulness, and they also affect the performance of your application at runtime. Increased CPU usage can lead to slower operations, making your app lag behind those of your competitors, and results in higher energy consumption and battery drain in mobile applications—something users do not appreciate.

However, this performance comes at a cost. C++ is considered a little harder than other programming languages because you need a solid grasp of how to manage and use memory. Furthermore, C++ can be extended with library headers, units, and components, adding even more complexity. Consequently, diagnosing problems and tracking issues in C++-based applications requires more skill and know-how. Launching a version of an application without testing can lead to unwanted outcomes such as the dreaded blue or black Windows screen. Failure to properly test code can also lead to random performance drops, higher CPU usage, and unoptimized energy consumption.

For example, multi-threaded programming across CPU, GPU, and memory operations is important, but it can give rise to problems synchronizing threads and coordinating data reads and writes. Multi-threaded functions, methods, classes, and libraries should therefore be rigorously tested before general availability. In C++, testing multithreaded and parallel code requires more professional skill than testing traditional single-threaded code.
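
As a minimal sketch of the kind of issue such tests need to catch: two threads updating shared state form a data race unless the access is synchronized. With the mutex in place the result is deterministic and can be asserted in a test; remove the lock and the final count becomes unpredictable.

    #include <iostream>
    #include <mutex>
    #include <thread>

    int counter = 0;
    std::mutex counterMutex;

    void Work() {
        for (int i = 0; i < 100000; ++i) {
            std::lock_guard<std::mutex> lock(counterMutex);  // guard the shared write
            ++counter;
        }
    }

    int main() {
        std::thread a(Work);
        std::thread b(Work);
        a.join();
        b.join();
        std::cout << counter << '\n';  // always 200000 with the lock in place
        return 0;
    }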

Nowadays most of these problems can be monitored with tools baked into operating systems, which makes it easier to detect issues at runtime. There are therefore few good reasons to publish and use untested applications. C++ developers should test their own code as well as any third-party code embedded in the main application.

What is unit testing and why do I need to use it?

Unit testing is a software development technique that focuses on individual units or components of a larger application. The process seeks to validate that each unit or component meets the project requirements.

Generally, unit testing is applied in the early stages of the development process, before any of the code is released as an alpha or beta. Every unit or component needs to be updated based on the requirements of the operating system and the coding standards and conventions of the language.

Generally, running tests does not require deep technical skill, but some tests involve more precise work that yields more informative results. These are typically computational, multi-tasking applications such as AI or other engineering applications, which are mostly based on C/C++ code or on languages that bind to C++ modules and libraries, such as Python or Delphi.

Unit testing is also applied to test different versions of units or components of a software system. Sometimes new versions of units may not fit your requirements or may cause problems during the runtime of your applications. Here are some of the problems:

  • Lower performance
  • Higher memory or CPU usage
  • Graphical issues
  • Crashes (rare)
  • Random freezes at runtime

If there are problems in your main application, it may be hard to determine which unit or component is causing the issue. This is why unit testing is important in the early stages of development: it lets the developer or the team determine whether the tested units and components are suitable for use.

Here are a few bad excuses for not doing unit tests, and some tips on how to address them:

Testing our software is too difficult!

  • Try to redesign or refactor.
  • Try to decouple.
  • Try TDD, which helps ensure a cleaner design.

We can’t test now. We are too pressed for time.

  • Technical debt accrues, and bugs are more time-consuming to fix in the long run.
  • Prioritize, but make sure you test the most relevant parts.

Unit testing may be applied manually for specific units or components, or automated to cover parts of many units. Automated tests run whenever the code changes to ensure that new code does not break existing functionality.

Unit tests are generally small pieces of code that exercise a single unit or component—typically a function, a method of a class, or a library routine—in isolation from the main software system. Developers use them to identify problems and find fixes early in the development process. This improves the overall quality of the main application, reduces bugs and issues, and reduces the time required for later testing.

Executing basic unit testing
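
Here is a minimal, dependency-free sketch of what a basic unit test looks like; the ApplyDiscount function is a hypothetical unit under test. Dedicated frameworks such as GoogleTest or Catch2 build on the same idea with fixtures, richer assertions, and reporting.

    #include <cassert>

    // Hypothetical unit under test.
    double ApplyDiscount(double price, double discountPercent) {
        return price * (1.0 - discountPercent / 100.0);
    }

    // Each assert checks one expected behavior of the unit in isolation.
    int main() {
        assert(ApplyDiscount(100.0, 0.0)  == 100.0);  // no discount
        assert(ApplyDiscount(100.0, 25.0) == 75.0);   // quarter off
        assert(ApplyDiscount(200.0, 50.0) == 100.0);  // half off
        return 0;  // reaching this line means every check passed
    }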

How can I do better unit testing and code maintenance in C++?

When you write unit tests, there are many tools to help maintain your C++ code. The Visual Assist C++ productivity plugin, for example, includes a number of features that can be used during application development, when testing a unit, or when using a unit in the main application.

Visual Assist is one of the definitive plugins that conceptualized and shaped most of the current features you see now in Visual Studio. And to this day it continues to develop user-centric design for maximum productivity and usability.


The post Ensuring Code Quality: Why Every C++ Developer Needs Unit Tests first appeared on Tomato Soup.

]]>
https://www.wholetomato.com/blog/ensuring-code-quality-why-every-c-developer-needs-unit-tests/feed/ 0 3469
Summer CodeFest: Magnificent or Malevolent: Maps! Measured, Monitored, & Magnified! [Mrecap] https://www.wholetomato.com/blog/summer-codefest-magnificent-or-malevolent-maps-measured-monitored-magnified-mrecap/ https://www.wholetomato.com/blog/summer-codefest-magnificent-or-malevolent-maps-measured-monitored-magnified-mrecap/#respond Sun, 27 Aug 2023 13:42:03 +0000 https://blog.wholetomato.com/?p=3380 Webinar overview:  Std::maps is a staple in the C++ world for sure. It’s reliable and useful, but in this presentation, David Millington goes a level deeper and examines how other features offered beyond the standard...

The post Summer CodeFest: Magnificent or Malevolent: Maps! Measured, Monitored, & Magnified! [Mrecap] first appeared on Tomato Soup.

]]>
Webinar overview: 

std::map is a staple in the C++ world for sure. It's reliable and useful, but in this presentation David Millington goes a level deeper and examines how features offered beyond the standard library can be used to maximize the usefulness of the data structure.

Quick Refresher on Maps

Maps are essentially a way to store key-value pairs in an associative structure that can be used to look up connected pieces of data. Maps are ubiquitous: key-to-value lookup is used everywhere—filenames to files, index number to row/column, ID number to name, and the list goes on.

Maps can be ordered or unordered. They are similar to static arrays and vectors, but differ in a few key ways, such as memory management, performance characteristics, and the key types they support—watch this section of the webinar to learn more.
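
A small sketch of both flavors, using made-up data:

    #include <iostream>
    #include <map>
    #include <string>
    #include <unordered_map>

    int main() {
        // Ordered map: keys are kept sorted, lookups are O(log n).
        std::map<int, std::string> idToName{{3, "Ada"}, {1, "Bjarne"}, {2, "Grace"}};
        for (const auto& [id, name] : idToName)
            std::cout << id << " -> " << name << '\n';  // prints in key order 1, 2, 3

        // Unordered map: hash-based, average O(1) lookup, no ordering guarantee.
        std::unordered_map<std::string, std::string> pathByFile{{"main.cpp", "/src/main.cpp"}};
        std::cout << pathByFile["main.cpp"] << '\n';
        return 0;
    }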

Things to Remember from the Webinar

 

Slide Deck Presentation

Replay

The post Summer CodeFest: Magnificent or Malevolent: Maps! Measured, Monitored, & Magnified! [Mrecap] first appeared on Tomato Soup.

]]>
https://www.wholetomato.com/blog/summer-codefest-magnificent-or-malevolent-maps-measured-monitored-magnified-mrecap/feed/ 0 3380
Summer Codefest: Lambdas go Baa! [Recap] https://www.wholetomato.com/blog/summer-codefest-lambdas-go-baa-recap/ https://www.wholetomato.com/blog/summer-codefest-lambdas-go-baa-recap/#respond Mon, 21 Aug 2023 17:56:22 +0000 https://blog.wholetomato.com/?p=3365 Webinar overview:  This presentation by product manager, David Millington, talks about the convenient way to define an anonymous function object added in C++11. This topic was chosen because while it’s extremely useful, the data we...

The post Summer Codefest: Lambdas go Baa! [Recap] first appeared on Tomato Soup.

]]>
Webinar overview: 

This presentation by product manager David Millington covers lambdas, the convenient way to define an anonymous function object that was added in C++11. The topic was chosen because, while lambdas are extremely useful, the data we see suggests there are two groups of C++ developers: those who use them extensively, and those who barely use them at all.

When to use lambdas:

The main benefits of using lambdas are:

  • They improve readability for you and your team.
  • Their anonymity makes them easier to maintain (no names are needed for small functions or functors).
  • They localize logic to the code that uses it.

Furthermore, lambdas are especially useful when your logic needs to go inside something else, such as an algorithm or a callback. These code-layering problems are a nuisance when reading code, and lambdas make it easier to localize the logic where it is used.

Comparing lambdas with a traditional functor

A comparison between a sort functor written in traditional structure vs a lambda.

On the left is a standard functor, written traditionally with a struct and an operator(). It works and functions just as a lambda would, but it is longer and arguably more difficult to comprehend when viewed in the context of actual source code.

On the other hand, a lambda is significantly shorter and easier to read. With the structure of a lambda, the code being called appears directly where it is used, and the syntax is unmistakable; just look for the following three elements (a short sketch follows the list):

  • [ ] – the capture clause (captures state)
  • ( ) – the parameter list
  • { } – the body of the function
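To make the comparison concrete, here is a minimal sketch of the same sort written both ways. The Employee type and sample data are assumptions made for this example and are not the code shown in the webinar.

    #include <algorithm>
    #include <string>
    #include <vector>

    struct Employee {
        std::string name;
        int age;
    };

    // Traditional functor: a named struct with operator() defined away from the call site.
    struct ByAge {
        bool operator()(const Employee& a, const Employee& b) const {
            return a.age < b.age;
        }
    };

    int main() {
        std::vector<Employee> staff{{"Ana", 34}, {"Ben", 28}, {"Cy", 41}};

        // Functor version: the comparison logic lives elsewhere in the file.
        std::sort(staff.begin(), staff.end(), ByAge{});

        // Lambda version: [] captures state, () declares parameters, {} is the body,
        // and the logic sits right where it is used.
        std::sort(staff.begin(), staff.end(),
                  [](const Employee& a, const Employee& b) { return a.age < b.age; });
    }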

Skip to 18:18 of the replay to learn more about lambda syntax and how to structure inline functions.

Summary: other tips for using lambdas

Here are a couple of other things you should look out for according to the presentation:

Slide Deck Presentation

Replay

The post Summer Codefest: Lambdas go Baa! [Recap] first appeared on Tomato Soup.

]]>
https://www.wholetomato.com/blog/summer-codefest-lambdas-go-baa-recap/feed/ 0 3365
Summer CodeFest: Modern C++ with Modern 3D [Recap] https://www.wholetomato.com/blog/summer-codefest-modern-c-with-modern-3d-recap/ https://www.wholetomato.com/blog/summer-codefest-modern-c-with-modern-3d-recap/#respond Sun, 20 Aug 2023 06:27:57 +0000 https://blog.wholetomato.com/?p=3346 Webinar overview: 3D Graphics in C++ Dr. Yilmaz Yoru shares his knowledge on graphics, as well as its counterpart analyzers and calculations used in 3D C++. He uses C++ Builder for most of his examples...

The post Summer CodeFest: Modern C++ with Modern 3D [Recap] first appeared on Tomato Soup.

]]>
Webinar overview: 3D Graphics in C++

Dr. Yilmaz Yoru shares his knowledge of graphics, along with the analysis and calculations used in 3D C++. He uses C++Builder for most of his examples, but almost any compiler can be used for the projects he demoed. Check out his website and other projects here.

Why use C++ for 3D

C++ is one of the top options if you are working with 3D graphics for the same reason you use it in embedded systems and high frequency trading—speed and performance. 

Any programming language can handle basic 2D graphics, but if we want to display 3D graphics in real time (e.g. 3D simulations or rendering for video games), then a language and environment that runs fast is essential. Furthermore, C++ also provides support for some of the most popular 3D libraries available, such as OpenGL (GLUT) and Direct3D.

Features of C++ used in 3D

There are a number of useful features in C++ that apply to general programming as well as 3D work. Watch the session to grasp the fundamentals of these features and how they can be used in 3D C++; a small sketch combining several of them follows the list. Some of the features included:

  • Class features (constructors, move, copy, move operator, etc.)
  • std::array
  • std::vector
  • std::map
  • lambdas
  • templates
  • unique_ptr
  • std::sort
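As a small sketch of how a few of these features can fit together in a 3D context (the Vec3 and Mesh types below are invented purely for illustration):

    #include <algorithm>
    #include <array>
    #include <memory>
    #include <vector>

    using Vec3 = std::array<float, 3>; // x, y, z

    struct Mesh {
        std::vector<Vec3> vertices;
    };

    int main() {
        // unique_ptr manages ownership of the mesh data.
        auto mesh = std::make_unique<Mesh>();
        mesh->vertices = {{0.f, 0.f, 5.f}, {1.f, 2.f, 1.f}, {3.f, 1.f, 9.f}};

        // Sort vertices back-to-front by z using a lambda, a common step
        // before rendering transparent geometry.
        std::sort(mesh->vertices.begin(), mesh->vertices.end(),
                  [](const Vec3& a, const Vec3& b) { return a[2] > b[2]; });
    }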

We have other sessions in the Summer CodeFest that talk about some of these features such as lambdas and templates. Visit our blog to find them.

Color Management and Color Applications

[In modeling for 3D,] Pixels are the real graphics.

Graphical work in 2D/3D is primarily managing how the colors of pixels change and the underlying mathematics that decides when and how these changes happen. 

The bulk of the work is computational and applied mathematics. Determining how pixels change relies on complex mathematical models, and consequently, in modern 3D, programmers must find a way to visualize gigabytes’ worth of numbers. Fortunately, the C++ features mentioned earlier can greatly simplify this, since you are working more closely with the actual data and memory, and the results can be shown in real time too.

As discussed earlier, 3D work is fundamentally the crunching of numbers using appropriate mathematical operations and models. Here are some examples shared in the presentation (a worked example follows the list):

  • Rotation matrices (used in 3D projection, vectors, robotics)
  • Euler formulas (shows how a single axis parameter in 2D can be rotated to create 3D shapes)
  • Quaternions (used to describe orientation or rotations in 3D space using an ordered set of four numbers)
  • Octonions, rotors, and beyond (even more complex scalar and complex vectors using advanced algebra)
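To make one of these ideas concrete, here is a minimal worked example of rotating a 3D point about the Z axis with a rotation matrix. It is plain C++ written only for illustration; real projects would typically use a maths library such as GLM.

    #include <array>
    #include <cmath>
    #include <cstdio>

    using Vec3 = std::array<double, 3>;
    using Mat3 = std::array<std::array<double, 3>, 3>;

    // Rotation about the Z axis by angle theta (radians).
    Mat3 rotationZ(double theta) {
        const double c = std::cos(theta), s = std::sin(theta);
        return {{{c, -s, 0.0},
                 {s,  c, 0.0},
                 {0.0, 0.0, 1.0}}};
    }

    // Matrix-vector multiplication: applies the rotation to a point.
    Vec3 apply(const Mat3& m, const Vec3& v) {
        Vec3 r{};
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j)
                r[i] += m[i][j] * v[j];
        return r;
    }

    int main() {
        const double pi = std::acos(-1.0);
        const Vec3 p{1.0, 0.0, 0.0};
        const Vec3 q = apply(rotationZ(pi / 2.0), p); // expect roughly (0, 1, 0)
        std::printf("(%.2f, %.2f, %.2f)\n", q[0], q[1], q[2]);
    }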


Slide Deck Presentation

Please email Dr. Yilmaz for a copy of his presentation slides.

Replay


You can also find Dr. Yoru’s website here.

The post Summer CodeFest: Modern C++ with Modern 3D [Recap] first appeared on Tomato Soup.

]]>
https://www.wholetomato.com/blog/summer-codefest-modern-c-with-modern-3d-recap/feed/ 0 3346
Visual Assist 2023.4 now released https://www.wholetomato.com/blog/visual-assist-2023-4-released/ https://www.wholetomato.com/blog/visual-assist-2023-4-released/#respond Thu, 17 Aug 2023 20:44:35 +0000 https://blog.wholetomato.com/?p=3336 VA 2023.4 is now published and is now available to download!  This release marks a major milestone in Visual Assist’s history as it starts its official support for Unity engine development. Also in this release:...

The post Visual Assist 2023.4 now released first appeared on Tomato Soup.

]]>
VA 2023.4 has been published and is now available to download!

This release marks a major milestone in Visual Assist’s history as it begins official support for Unity engine development. Also in this release: the start of support for CUDA development in C/C++ and numerous parser improvements. Read on for the complete details of the changes and improvements in this release.

Start of official support for Unity

It’s been a long time coming, but Whole Tomato is glad to announce that the 2023.4 build features the first of many Unity-specific features. Nope, not the hivemind—we are of course talking about the very versatile game engine and game development platform.

For those unaware, the Unity engine is the backbone of both 2D and 3D games, ranging from wildly popular and suspicious games all the way to full-blown, highly acclaimed triple-A titles.

Visual Assist has long been popular for helping game developers deal with complex C++ code. Starting with this release, Visual Assist expands its focus to C# game development. Users can expect VA staples such as refined navigation, intelligent autocomplete, and code refactoring to work just as well for C#.

Furthermore, users can also submit feature requests specific to Unity development. We are starting with shaders (more on this below), but if you have any suggestions as to what features are missing in your Unity development workflow, do let us know by emailing support.

Shaders for Unity

The start of official support for Unity development is headlined by shader file support. Similar to our earlier addition of HLSL support, we are kicking off the Unity updates by adding Unity shader files to our list of supported languages.

CUDA C/C++ Development

If you are a data scientist, software engineer, or plain hobbyist looking to harness the power of your GPU for general-purpose programming tasks, then you most likely know about the Compute Unified Device Architecture (CUDA). This programming model developed by Nvidia allows programmers to utilize the multi-core performance of graphics cards for other, non-graphics applications (although it is perfectly fine to use it for 2D/3D too!).

If you are interested in CUDA, then rejoice! VA 2023.4 also marks the start of official support for CUDA development. Visual Assist can now parse and analyze CUDA-related syntax, libraries, and APIs, so you get IntelliSense-like features, navigation, and highlighting for CUDA (.cu) files.

A CUDA file with proper syntax highlighting and code analysis features.

Parser Improvements: template functions with auto / trailing return type and std::tuple autocompletes 

VA 2023.4 now properly highlights and parses trailing return types, which work around a C++ limitation where the return type of a function template cannot be stated up front if it depends on the types of the function arguments. This release specifically deals with some of the edge cases reported by our users.

A trailing return type is declared by putting the auto keyword in place of the return type before the function identifier and specifying the exact return type after the parameter list. Learn more about it here.
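As a short illustration of the pattern (the add() function template below is a generic example, not code from the release notes):

    #include <iostream>

    // 'auto' stands in before the name; the real return type comes after '->'
    // and may depend on the parameter types.
    template <typename A, typename B>
    auto add(A a, B b) -> decltype(a + b) {
        return a + b;
    }

    int main() {
        auto x = add(2, 3.5);   // deduced return type is double
        std::cout << x << '\n'; // prints 5.5
    }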

The parser is aware of the sum function, and proper syntax highlighting and navigation features are applied.

Also fixed in this release are autocompletions for std::tuple initializations. This improves how the VA parser handles certain templated types, so users will see better completion suggestions while typing in their codebase, for example when typing std::tuple.
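For context, this is the kind of templated initialization involved; the record tuple below is a generic illustration rather than code from the release notes.

    #include <string>
    #include <tuple>

    int main() {
        // Completion on the tuple's template arguments and on std::get
        // benefits from the parser understanding the templated type.
        std::tuple<int, double, std::string> record{42, 3.14, "tomato"};

        auto id = std::get<0>(record);
        auto name = std::get<2>(record);
        (void)id; (void)name;
    }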

Better Add Include logic

Visual Assist can add include directives for headers that resolve unknown symbols in the current C++ source file. The underlying logic for add include has been improved for better context-awareness, resulting in better predictions of where to place the new include.

Add include now inserts new lines in the most logical place.

Add include can be accessed by hovering over an unknown symbol and opening the quick actions and refactoring menu (Shift + Alt + Q).

Some other spring cleaning-type improvements

We’ve also made some changes to a few minor UI elements and options in the app that you should know about. Firstly, our shader support has been available for a few rounds of releases already, and we’re excited to announce that it has finally finished its beta phase and will now be enabled by default.

Secondly, we’ve streamlined the Game Development tab of our options dialog. This makes room for upcoming additions (stay tuned!).

Thirdly, we’ve tweaked some tomatoes and icons along the way so they better respond to your actions and better show which options are available to you. Relevant options and menus are emphasized when they are needed; secondary options subtly fade into the background otherwise. This is in line with our commitment to distraction-free coding.

Lastly, if you’ve missed it or haven’t installed the latest version yet, you may have noticed that the Visual Studio Marketplace listings for the 32-bit and 64-bit versions of Visual Assist have now been combined. Visual Studio versions 2010 through 2022 are now accessible from one listing.

Bug Fixes

  • Fix for ‘VaMenuPackage’ package error affecting VS2022 17.7.0 3.0 load
  • Fixed issue where some types with leading macros before template definitions were not parsed correctly.
  • Fixed issue where autocomplete of some types, such as std::tuple, would produce partial results.  
  • Fixed rendering of suggestion list tomato icons in Visual Studio 2022. 
  • Fixed issue where the VA Navigation Bar could become smaller than intended.
  • Fixed Code Inspections error that could happen in some cases in Visual Studio 2022 17.6+. 

Thanks to those who submitted their feedback and bug reports. Keep ‘em coming. Send us a message or start a thread on the user forums for bug reports or suggestions.

Contrary to the statement in the preview blog, VA 2023.4 is a bit different in that it is being released to everyone simultaneously, with no rolling release mechanism, because it includes some crucial updates we want to share with everyone as quickly as possible. You can also check our download page to manually update to the latest release. Happy coding!


The post Visual Assist 2023.4 now released first appeared on Tomato Soup.

]]>
https://www.wholetomato.com/blog/visual-assist-2023-4-released/feed/ 0 3336