Do I Need To Know C++ For Unreal Engine? The Updated 2025 Guide

Quick Answer: While C++ isn’t strictly required for Unreal Engine development thanks to Blueprint visual scripting, learning it unlocks advanced capabilities and significantly expands your development options. For beginners, you can start with Blueprints and gradually learn C++ for Unreal Engine as your projects grow more complex.

C++ is used to program and create video games on different platforms.

What This Guide Covers

Whether you’re a complete beginner or transitioning from another engine, this comprehensive guide answers the most common questions about C++ and Unreal Engine development. You’ll learn when C++ is necessary, what alternatives exist, and how to make the best choice for your project goals.

The Short Answer: Blueprints vs C++

You can absolutely create games in Unreal Engine without knowing C++. Unreal’s Blueprint visual scripting system allows you to build complete games using a node-based visual interface instead of traditional code. Many successful indie games have been built entirely with Blueprints.

However, C++ becomes valuable when you need:

  • Maximum performance optimization
  • Complex gameplay mechanics
  • Custom engine modifications
  • Integration with third-party libraries
  • Advanced AI systems

However, to get the most out of UE and improve your fundamentals, you should not use Blueprints or C++ exclusively. Ideally, you should learn how to use both. If you want to learn more about C++ vs Blueprints, we’ve discussed when to use Blueprints or C++ when developing games in another article.

Is Unreal Engine good for beginners?

Unreal Engine is a great game engine for beginners as it provides access to a lot of templates and assets completely for free (royalties only apply once your game earns more than $1M gross revenue). However, it is also expansive and powerful enough for experienced developers. If you are familiar with other platforms, such as Unity or previous Unreal Engine versions, you will be able to jump right in and start creating video games using Unreal Engine C++. A virtual game and graphics studio that specializes in Unreal Engine C++ development can also be a great resource for learning the language and developing your skills.

The process of developing a game with Unreal Engine is not difficult to understand, but it does require a lot of time and practice, knowledge of the language, and commitment. And one of the very first questions is: where do I begin?

Do you need to know how to code for Unreal Engine?

Creating entire games with Unreal Engine can be a daunting task, but with the right knowledge and skills, you can make amazing programs. Some basic knowledge of coding—and C++ to an extent—helps, but it is not necessary to be an expert. Unreal Engine is not just intended for developers but also for creators, and a game programmer is not limited to working with Unreal Engine.

It is even possible to create full-fledged games without any coding background. Popular gaming engines like Unity or Unreal Engine offer visual scripting tools or no-code solutions for managing game assets. Unreal has its Blueprint scripting process wherein you can use nodes to replace normal programming logic.

But if you want to dive into the nitty gritty, learning the fundamental language on which the engine is based is a surefire way to greatly increase both your options and your efficiency. Additionally, many other game development platforms, such as Unity and GameMaker, use similar coding languages. Knowing how to code for these platforms will help you get started in the game development industry.

Learning Path Recommendations for Complete Beginners

  1. Start with Blueprint Fundamentals
  2. Learn Basic C++ Outside Unreal
    • Master fundamental programming concepts
    • Practice with simple console applications
    • Understand object-oriented programming principles
  3. Transition to Unreal C++
    • Start with simple C++ components
    • Gradually replace Blueprint functionality with code
    • Learn Unreal-specific C++ conventions and macros
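
To give a feel for the last point above (Unreal-specific conventions and macros), here is a minimal, hypothetical sketch of what they look like in practice. The class and member names are made up, and a real class must live inside an Unreal module and project to build:

// MyActor.h (hypothetical example of common Unreal C++ conventions and macros)
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "MyActor.generated.h"   // generated header, always included last

UCLASS()                          // registers the class with Unreal's reflection system
class AMyActor : public AActor    // Actor classes use the 'A' prefix by convention
{
    GENERATED_BODY()

public:
    UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = "Stats")
    float CurrentHealth = 100.0f; // editable in the editor, accessible from Blueprints

    UFUNCTION(BlueprintCallable, Category = "Stats")
    void ApplyDamage(float Amount); // callable from Blueprint graphs

protected:
    virtual void BeginPlay() override; // standard Actor lifecycle hook
};

These reflection macros (UCLASS, UPROPERTY, UFUNCTION, GENERATED_BODY) are the kind of Unreal-specific syntax you will not encounter in plain C++ tutorials.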

READ MORE: Install and set up Unreal Engine with Visual Studio.

When is C++ essential then?

C++ coding becomes essential when you’re dealing with specific use cases and the Blueprint system is no longer sufficient.

• Performance-Critical Applications

C++ provides direct memory management and system-level control that Blueprint scripting cannot match. For AAA games, VR experiences, or applications requiring 60+ FPS with complex systems, C++ often becomes necessary.

• Advanced Game Systems

While Blueprints excel at prototyping and standard gameplay, certain advanced features require C++ implementation:

  • Custom rendering pipelines
  • Specialized physics calculations
  • Multi-threaded operations
  • Platform-specific optimizations

• Professional Development

Most professional game studios expect C++ knowledge for Unreal Engine positions. Understanding both Blueprint and C++ makes you more versatile and employable in the game development industry.

• Custom Gameplay Mechanics

With C++, you can implement complex gameplay logic that goes beyond what is possible with Blueprints. This includes creating custom character controllers, AI behaviors, and game rules.

• Creating components and 3D environments

Components are the basic building blocks of Unreal Engine. They can be used to create 3D environments, menus, and other user interface elements, and they can be exported to other platforms.

• Advanced AI Systems

C++ lets you create sophisticated AI for non-player characters (NPCs) and other game elements, including custom pathfinding algorithms, decision-making systems, and behavior trees.

• Create logic and integrate with scripts

Logic is the code that controls how players interact with each component. Scripts are a special type of code that is more visual. Using both C++ and scripting in Unreal allows for seamless development of your games.

• Test and debug games

Testing and debugging games is an important part of the game development process. When you work with mechanics created using C++, verifying those components will most likely require C++ knowledge as well. Problems that can be debugged include crashes, missing textures, and incorrect game logic.

Blueprint vs C++ Performance Reality

When Performance Differences Matter

The performance gap between Blueprint and C++ varies significantly by use case:

  • UI and Menu Systems: Minimal difference
  • Simple Gameplay Logic: Negligible impact for most games
  • Heavy Calculations: C++ shows clear advantages
  • Frame-Critical Systems: C++ often necessary for consistent performance

Hybrid Approach Benefits

Most successful Unreal projects use both systems strategically. Learn more here.

  • Blueprints for: UI, game flow, designer-friendly tweaking
  • C++ for: Core systems, performance-critical code, complex algorithms

Development Environment Setup

If you’ve decided to learn C++ for Unreal Engine, it’s best to bring the best equipment on your journey!

Recommended Tools

Primary IDE: Visual Studio is our top choice due to the following:

  • Access to Visual Assist for enhanced C++ IntelliSense and navigation
  • Accessible for learning due to free community edition
  • Unreal Engine integration extensions
  • Version control integration (Perforce or Git)

Optimization for Productivity

Modern development requires efficient tooling. Visual Studio’s default C++ support, while functional, can feel limited when working with Unreal’s complex codebase. Supplementary tools like Visual Assist significantly improve:

  • Code navigation and search capabilities
  • Enhanced syntax highlighting for Unreal macros
  • Improved auto-completion and error detection
  • Better refactoring tools for large codebases

Common Beginner Mistakes to Avoid

• Overcommitting to One Approach

New developers often choose either Blueprint-only or C++-only approaches. The most effective strategy combines both systems based on specific needs.

• Ignoring Optimization Early

While premature optimization is problematic, understanding performance implications from the start prevents costly rewrites later.

• Neglecting Documentation

Unreal Engine’s documentation is extensive. Regularly consulting official docs, community forums, and example projects accelerates learning significantly.

READ: Industry Perspective: What Game Studios Expect From You

Making Your Decision

Choose Blueprint-First If You:

  • Are new to programming or game development
  • Want to see results quickly and stay motivated
  • Focus on design and creative aspects over technical implementation
  • Plan to work primarily on smaller or indie projects

Prioritize C++ Learning If You:

  • Have existing programming experience
  • Aim for positions at larger game studios
  • Want maximum control over performance and implementation
  • Plan to work on technically demanding projects

Conclusion: Your Path Forward

The question isn’t whether you need C++ for Unreal Engine—it’s about understanding when each tool serves your goals best. Blueprint provides an excellent entry point that can take you surprisingly far, while C++ offers the power and flexibility for advanced development.

Start with Blueprint to build confidence and understanding of game development concepts. As your projects grow in complexity and your skills develop, gradually incorporate C++ where it provides clear benefits. This progressive approach ensures you’re always working with tools appropriate to your current skill level while building toward more advanced capabilities.

Remember that both Blueprint and C++ are valuable skills in the modern game development landscape. The most successful Unreal Engine developers understand both systems and use them strategically to create engaging, performant games.

Next Steps:

  • Download Unreal Engine and complete the official Blueprint tutorials
  • Join the Unreal Engine community forums and Discord
  • Start with simple projects and gradually increase complexity
  • Consider supplementing your IDE with productivity-enhancing tools like Visual Assist

The journey from Blueprint beginner to C++ expert takes time, but each step opens new creative and professional possibilities. Your games—and your career—will benefit from this comprehensive skill set.

Highly Recommended for Unreal C++ 

If you do decide to code using C++ for Unreal Engine, you will most likely download Visual Studio, the official IDE of choice for developing C++ games in Unreal Engine. It provides extensive navigation, refactoring, auto-suggestion, and syntax highlighting features for C++ development.

However, Visual Studio also caters to C and C#, and unfortunately the default support and tooling for C++ may seem relatively weaker at first glance. Furthermore, Unreal Engine has bespoke coding elements and syntax. This can lead to frustration when developing Unreal C++ games in the IDE because some basic navigation features, such as syntax highlighting, may be unresponsive or unavailable completely.

For these cases, it is highly recommended to install a supplementary plugin like Visual Assist which improves the overall IDE experience and replaces the frustrating elements with tailored features made for C++ Unreal Engine development. It makes the IDE features responsive and adds “understanding” so that basic features such as code highlighting, search, and auto-suggestions work properly.

Visual Assist 2025.3 release post
Visual Assist 2025.3 is now public and available to download. 

This release improves developer experience by updating the feedback UI for some of the features added in recent releases. We’ve also updated our options dialog’s look and feel alongside some of the line highlighting options, and fixed many bugs and issues based on user reports.

The highlight of this release is a new option for VA’s Extract Method that lets you fine-tune the parameter list: selecting variables, excluding unnecessary ones, or arranging their order.

On the visual feedback side, we’ve enhanced the popup interface for Replace Auto With Exact Type. Additionally, macro expansions now have their context revealed upon hovering. Learn more about these changes by going through our release blog post.

Download the release now by visiting our website download page.

Enhanced Extract Method with parameter customization

Visual Assist’s Extract Method feature now offers full parameter customization through an intuitive dialog interface. When extracting code into a new method, developers can now:

  • Add, remove, or reorder parameters before the method is created
  • Modify function signatures using natural coding language syntax
  • Make extracted methods more general by adding custom parameters

This enhancement skips most of the post-extraction editing; instead, a smarter interface guides you to adjust the extracted method as Visual Assist creates the implementation.

This is unlike most rigid UI implementations found in other tools. Visual Assist uses its intelligent parsing to understand your code modifications, providing a more natural and flexible experience.

New editing options for extract method. Edit name, move, or reorder parameters.

How it works: Select code you want to extract, choose Extract Method under the quick actions menu, and customize the function declaration in the dialog using standard C++ syntax. Use VA’s updated UI to create the optimized method accordingly.
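
As a rough before-and-after illustration of what Extract Method does (hypothetical code; the types, names, and exact dialog behavior are made up for this sketch):

#include <vector>

struct Item  { double Price; int Quantity; };
struct Order { std::vector<Item> Items; };

// Before: the selected lines compute a total inline
double ProcessOrderBefore(const Order& order)
{
    double total = 0.0;
    for (const Item& item : order.Items)      // <- selection to extract
        total += item.Price * item.Quantity;  // <- selection to extract
    return total;
}

// After: Extract Method generates a new function, and the dialog lets you
// rename it and adjust its parameter list before the code is created
double ComputeTotal(const Order& order)
{
    double total = 0.0;
    for (const Item& item : order.Items)
        total += item.Price * item.Quantity;
    return total;
}

double ProcessOrderAfter(const Order& order)
{
    return ComputeTotal(order);
}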

Macro Expansions on Hover (Quick Info)

This was added based on a request from a user who was developing in Unreal Engine (UE) in Visual Studio. Many UE users turn off the built-in IntelliSense and just rely solely on VA’s features in order to maximize performance on large codebases—which is usually associated with the size of Unreal projects. Unfortunately, this also means that the macro expansion info provided by IntelliSense is also removed.

With this new change, however, VA can now display macro expansions instantly when you hover over macro definitions, providing immediate insight into complex preprocessor directives without interrupting your workflow.

Hover over macro definitions to show its expansion instantly.
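
To picture what the hover adds, consider a simple, made-up macro; hovering over a use of it would reveal the expanded form rather than just the macro name (the exact popup formatting may differ):

#include <iostream>

// Contrived macro used only for illustration
#define CLAMP01(x) ((x) < 0.0f ? 0.0f : ((x) > 1.0f ? 1.0f : (x)))

int main()
{
    float userValue = 1.4f;
    float brightness = CLAMP01(userValue);
    // Hovering over CLAMP01 above would show its expansion, roughly:
    //   ((userValue) < 0.0f ? 0.0f : ((userValue) > 1.0f ? 1.0f : (userValue)))
    std::cout << brightness << "\n";   // prints 1
}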

Improved dot to arrow conversion now supports auto pointers

VA’s dot-to-arrow conversion automatically changes . to -> when accessing members through pointers, eliminating the need to manually switch between dot and arrow operators.

With this update, the dot to arrow conversion feature now handles auto pointer declarations better. The plugin recognizes explicit pointer hints in auto variable declarations, providing more accurate code completion and conversion.
Example:


int myInt = 1;
int* myIntPtr = &myInt;

auto myAutoPtr = &myInt;      // Implicit pointer
auto* myExplicitAutoPtr = &myInt;  // Explicit pointer - now detected!

In the above example, both “myAutoPtr” and “myExplicitAutoPtr” have their auto type resolved to “int *”, but the second declaration makes it explicit that the variable should be a pointer.

This enhancement makes the feature more reliable when working with modern C++ auto declarations, reducing coding errors and improving developer productivity.
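
In practice, the conversion kicks in as you type member accesses. A small, hypothetical sketch (Widget and Refresh are made-up names):

struct Widget { void Refresh() {} };

void Example()
{
    Widget w;
    auto* widgetPtr = &w;   // explicit auto pointer, now recognized as a pointer

    // Typing "widgetPtr." here is converted to "widgetPtr->" automatically
    widgetPtr->Refresh();
}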

Modernized Options Dialog Interface

The Visual Assist Options dialog has been completely rebuilt with a modern UI framework, moving away from the legacy Win32 interface theme. This modernization represents the first step in a comprehensive UI refresh that will extend to other Visual Assist components in future releases.

Visual Assist 2025.3 updates the look and feel of the options dialog.

Improved Ray Line Highlighting Style

One of VA’s ways to showcase the current active line is achieved by using the “ray lines” highlighting style. Ray lines provide a subtle, non-intrusive way to highlight the current line using minimal horizontal lines without left/right borders.

New improved ray line highlighting style.

This option has been refined with better vertical spacing, addressing user feedback about the previous tight layout.

If you prefer using a different highlighting style, you can choose from the available options in the options dialog (Thin Frame, Background Color, and Ray Lines). To choose your preferred highlighting style, navigate to Extensions > VAssistX > Visual Assist Options > Editor > Highlighting > “Highlight current line with:”

Enhanced Replace Auto With Exact Type Accessibility

Building on the popular Replace Auto With Exact Type feature from previous releases, Visual Assist now makes this functionality more accessible via the right-click menu or automatically when typing the auto keyword.

Use Quick Info menu or right click on Auto.

Bug Fixes

For bug fixes and general improvements, the most critical update is the restoration of shader syntax coloring support in Visual Studio 17.12.0 and newer versions, addressing multiple related issues with code formatting and syntax highlighting in shader files across VS 2019 and 2022.

Additionally, there are significant performance improvements for Unreal Engine projects, specifically enhanced responsiveness of quick actions and refactoring menus. The release also includes fixes for HLSL file formatting and improved navigation performance for MAUI base classes.

The following list summarizes the most important bugs addressed in this release:

  • Fix for code formatting not working in shader files in VS 2019+
  • Fix for syntax coloring not working in shader files in VS 2022
  • Restored shader syntax coloring support in Visual Studio 17.12.0 and newer
  • Improved responsiveness of quick actions and refactoring menu in Unreal Engine projects
  • Fixed inconsistent filter control display in initial Find References results
  • Improved performance when navigating from MAUI base classes using Go To Related
  • Resolved formatting issues in HLSL files when shader support is enabled in Visual Studio 2019 and 2022

Availability & Feedback

This release was made generally available on June 30th and can be downloaded via the downloads page. As always, we appreciate feedback, especially on the recently introduced features and UI changes.

Update now with an active license to utilize all the features and fixes in this release. If you have any questions or encounter any issues, feel free to reach out to support@wholetomato.com.

How to get a job as a game developer in 2025 – Part 2: Insider advice from a studio game director
Last time, we shared some general tips about what skills and tools you need to get a job as a game developer in 2025. However, the game development industry is a dynamic and rapidly evolving field. It’s characterized by technological advancements and continuous innovation, and as we find in this article, subjected to external factors such as financial and industrial pressures.

If you want an edge over the competition, it’s important to get timely and accurate information about what game studios and teams are looking for now. Bonus points if you can get advice from someone who’s doing the actual hiring!

If that’s what you’re looking for, then you’re in luck: that’s exactly what we have in store for you today. We had a chat with a game director who has spent multiple years in the industry and has been involved in a lot of hiring decisions.

About the interviewee

The Whole Tomato team had a chat with Julian Bock (called Jules by friends and colleagues), an expert figure in the game development industry with almost two decades of experience. Based in Germany, Bock is currently working as the managing director at NUKKLEAR. Until just this March, he was the Director for Product Development at PLAION, the game development and publishing company responsible for the recently released and highly-acclaimed Kingdom Come Deliverance II.

We asked for his insights and comments about what’s going on in the game development industry and how that has affected how they look for new team members for their projects. 

Current State of the Game Development Industry

Job hunting in the game development industry in 2025 is highly competitive, owing mostly to the post-pandemic slump and the proliferation of AI-assisted development. These two factors have slowed down demand while at the same time increasing individual efficiency, making the competition tight, especially for new developers breaking into the industry.

For context, the pandemic spurred unprecedented growth (a 13% growth rate from 2017–2021), but expansion tapered dramatically to around 1% from 2021–2023, and the industry is now projected to grow only about 5% through 2028.

And for aspiring game developers or fresh graduates, the state of the game industry is one of the major (albeit uncontrollable) factors that decide how difficult it is to land your first role as a gamedev.

Bock explains that during the pandemic, “we had a lot more time for entertainment.” But when the lockdowns gradually eased up, the demand for games slowed down, while companies and businesses still had to realize the investments made during the pandemic boom.

“We came to the point that a lot of more money was invested into this obviously booming industry which led to projects being started, a lot of more publishers, and a lot more developers.”

But inevitably, the market stabilizes and investments slow down. This created a problem for new developers: a lot of games were still being published because companies still had budget from pandemic-era investments, but at the same time, there were fewer and fewer players than before.

The Reality of Modern Game Development Teams

Game development is fundamentally a team sport that requires long-term commitment. Most major projects operate on 3-5 year development cycles, with studios relying heavily on a stable core team of experienced developers. “It’s very important, in my view, that you start with a very strong, reliable core team,” explains Bock, who has managed teams ranging from 5 to 200+ people across various projects, including the recently released Kingdom Come: Deliverance 2.

This doesn’t mean newcomers are locked out—quite the opposite. Studios need a healthy mix of senior, regular, and junior developers for both cost reasons and for maintaining a pipeline of talent. The key is understanding where you fit in this ecosystem and how to position yourself for growth.

What It Really Takes to Get Hired in 2025

Tip 1: Scout the team you’re trying to join and see what you bring to the table.

Companies will always look for the best fit in terms of team composition. Regardless of your current skill level and experience, Bock also emphasizes how important it is to understand the team dynamics and composition of a usual game development team.

Teams can’t just be full of seniors—that becomes extremely expensive fast. There will always be space for less experienced devs. As mentioned above, studios need a healthy mix of senior, regular, and junior developers for both cost reasons and to have access to a broader set of skills.

So how to stand out as a junior dev? Understand the company and show them your potential and willingness to learn. 

Bock advises: “For the young people reading this, it’s important to know if you are being hired in a company,  [understand that] as a young developer, you can develop yourself while learning on the job, learning from the seniors, and getting insight of the realities from leadership.” 

If you are able to research and scout the company and the team, look for answers to these questions:

  • How big is the company I am joining? 
  • Is it more of an indie or a mid to large-sized company?
  • Do I know if the company is in the middle of producing a new game? 
  • Are they looking for any specific specialization? Or are they filling in general gaps in the workplace?
  • Do my skills fit the current project of the company? 
  • What do I know about the genre of games being developed?
  • Upon joining, what do I provide the company? Can this change if they train me?

Tip 2: Cultural Fit vs. Technical Skills

Another key consideration that Julian shared is to mind not only what skills you bring to the table, but also how you bring those to work. The output and pace of the project is dictated by the team building it. As someone who wants in on a project, you need to have certain skills and knowledge (or affinity for them), as well as a compatible mindset when joining the team.

Getting to keep a job is just as important as bagging it the first time around. A nail that sticks out gets hammered. Know your role, learn how to collaborate, and see how you can try out new things without slowing down your teammates in the process.

Ask yourself not only what the company can do for you, but also what you can do for the company—keeping it balanced, of course.

Tip 3: Flexibility is Your Greatest Asset

In connection with the last tip, as someone starting out in a new team, the most important trait for new developers isn’t necessarily technical prowess—it’s flexibility. “What is really important for young developers is being flexible,” Bock emphasizes. This means being willing to start at an appropriate level, prove your value and potential, and then negotiate your next step based on performance.

The games industry isn’t the highest-paying tech sector. With the same skillset and affinity for coding, developers who prioritize maximum compensation might find better opportunities in fintech or enterprise software, for instance. 

However, game development offers something unique: the opportunity to create experiences that generate genuine emotional responses in players. As Bock puts it, “You’re delivering an experience to the player… you’re delivering emotions.”

Tip 4: Do I specialize or do I generalize?

The eternal question of whether to specialize or develop broad skills depends heavily on the type of projects you want to work on. For small indie teams of 5-10 people, generalist skills are invaluable—you might need to handle everything from gameplay programming to UI design. However, larger AAA productions with teams of 100+ developers typically seek specialists: combat designers, vehicle systems programmers, or technical artists with specific expertise.

Most developers cannot afford to take long breaks, as the average pay grade cannot sustain them. Devs also don’t generally get any revenue share or royalties from the projects they worked on. After a release or at the start of a new game development cycle, devs have three main options:

  • Continue post-launch to produce patches, expansion content, DLCs, etc.
  • Get reassigned to a new team to start/continue developing a new game
  • Jump ship and start looking for a new project altogether

The smart approach for newcomers is to develop a solid foundation across multiple disciplines while building deeper expertise in one area that genuinely interests you. This gives you the flexibility to contribute to smaller teams while positioning yourself for specialized roles as you gain experience.

Bonus Tip: The AI Imperative

Perhaps the most critical advice for 2025 and beyond centers on artificial intelligence. “If I would be like a young graduate today… I think it’s most important to enter the AI game with clarity and dedication,” Bock advises.

While many roles in game development will likely be impacted or replaced by AI in the coming years, those who can effectively work with AI tools will become indispensable. “Some see AI as a threat, some as a chance. Don’t resist, try to rule while using it!” 

This trend is already visible across the industry. Companies like Ubisoft are experimenting with AI-powered procedural generation tools, while indie developers are using AI for everything from concept art to dialogue writing. Rather than viewing AI as a threat, emerging developers should embrace it as a powerful multiplier for their creativity and productivity.

Our note: Regardless of your stance on the usage of AI in the workplace, we cannot deny its usefulness in multiple areas of game development. Thus, new developers need to adapt to the demands of those who are hiring or else they risk being overshadowed by their AI-using peers.

Green flags and red flags for job hunters

What Studios Are Looking For

Beyond technical skills, studios value developers who understand the broader context of game development. This means grasping the business realities—budgets, timelines, and market pressures—that influence creative decisions. The best junior developers don’t just ask “What can the company do for me?” but try to keep a healthy balance and also consider “What can I do for the company?”

Cultural fit matters enormously, especially for core team positions. Game development is inherently collaborative, and toxic team members can derail projects that represent years of investment. Studios look for people who can handle criticism, adapt to changing requirements, and maintain positive relationships under pressure.

Building Your Foundation

While formal education can provide valuable structure and networking opportunities, the industry increasingly values demonstrable skills over degrees. A strong portfolio showcasing completed projects—even small ones—carries more weight than academic credentials alone. Contributing to open-source projects, participating in game jams, or creating mods for existing games can provide the practical experience that makes a resume stand out.

The rise of accessible development tools like Unity, Unreal Engine, and Godot means there are fewer barriers to entry than ever before. You can download professional-grade software and start building games immediately. What matters is the quality of what you create and your ability to discuss your design decisions intelligently.

Looking Forward: The Consolidation Opportunity

While the current industry contraction might seem discouraging, it also represents an opportunity. The market is moving toward “more quality product, less product,” as Bock predicts. This means that skilled developers who can contribute to polished, memorable experiences will be in high demand.

The key is positioning yourself for this future by developing skills that complement rather than compete with AI, building a network within the industry, and maintaining the flexibility to adapt as the landscape continues to evolve. Whether you’re interested in indie development, mobile games, or AAA productions, the fundamental principle remains the same: focus on creating great experiences for players, and the career opportunities will follow.

Game development remains one of the most rewarding creative fields for those willing to embrace its challenges. The industry needs fresh talent with new perspectives, and there’s never been a better time to start building the skills that will define the next generation of gaming experiences.

Get Ahead with Visual Assist for Unreal Engine work

If you are applying and looking for a job involving Unreal, using game-focused development tools like Visual Studio with Visual Assist can help you work smarter, navigate large codebases faster, and spend more time creating rather than troubleshooting. Download and try it now for free!


Where to next? A quick update about Visual Assist’s future from our GM
The best part of working in dev tools? Hearing directly from the people who use them. Over the past several months, we here at Whole Tomato have had the privilege of speaking with C++ professionals across many industries: gaming, fintech, agtech, manufacturing, and beyond. 

From those conversations, one thing is clear: despite the rise of new alternatives and lively debates about its future, C++ is and will remain a cornerstone across industries for years to come. We’re proud that Visual Assist has served the MSVS/C++ community for decades, and we sincerely hope to serve you for decades more.

Of course, the landscape is shifting. The gaming industry is facing significant headwinds following massive post-Covid growth and investment. Companies are still navigating how to integrate formerly nascent AI products that are rapidly becoming a mainstay for many coders. And debates about C++ safety continue to heat up.

From my perspective, this is why it’s so important that we remain connected to the community we serve. As our tech progresses, old problems become obsolete, and new problems arise. I’m proud of the work our team has done – and continues to do – to first listen to our customers and then act thoughtfully to build solutions that solve those problems.

One focus that will never fade for us is user experience. Developers consistently tell us they expect tools that are polished, seamless, and smart: tools that don’t fight for attention when not warranted or beg for a prompt, but instead subtly offer productivity boosts at the right time, without breaking concentration.

That’s why our roadmap for the coming months focuses specifically on honing VA’s existing capabilities to deliver even more productivity. This starts with a modernization of our UI (stay tuned for our next update), better surfacing of existing features, and eventually something we’re very excited about: subtle, behind-the-scenes AI features. In other words, not another bolted-on AI chatbot, but “under the hood” integrations that make the things you love about VA even better – something that over 70% of you responded favorably to in our recent community survey.

For example, can our parser coupled with AI quickly generate accurate unit tests? Can we leverage AI to identify memory safety issues, then use VA’s refactoring to address them? How can AI improve our renames, inspections, and navigation? These are the questions we’re asking. Of course it’s early, and there’s much work to be done, but we’re excited about the possibilities. And you should be too!

By the way, do you have thoughts? Send me a note – I’d love to hear what’s on your mind.

Ben Schwenk,
Whole Tomato general manager

C++ Pattern Matching: Should C++ Embrace Functional Programming Constructs?
Functional programming is influencing everything—even C++.

Pattern matching is a clean and expressive way to check a value against a given structure or pattern. It gives developers a compact way to define their matching criteria and specify actions for successful matches, unifying conditionals, destructuring, and type checks into a single, expressive construct.

Functional programming started as an academic concept for niche languages, yet it has successfully entered the general programming landscape. Functional programming concepts have spread across mainstream languages, including Java, JavaScript, and C++.

Pattern matching, once a hallmark of purely functional languages, has now become one of the most widely adopted features, demonstrating just how far functional programming ideas have permeated mainstream languages like C++.

Today, C++ developers typically approximate pattern matching through std::variant and std::visit, or with third-party libraries such as Mach7. These current approaches can produce verbose code that lacks the consistency and intuitiveness of languages with native pattern matching.

So here’s the real question: Should C++ fully embrace functional programming constructs like built-in pattern matching? Or should it stay true to its performance-first roots and resist this shift?

Let’s dig in.

What is pattern matching?

Pattern matching is a programming construct that allows you to test data against a set of conditions—or patterns—and execute code based on which pattern fits. It’s widely used in functional programming languages like Haskell, Rust, Scala, and even modern JavaScript (via destructuring).

At a glance, pattern matching goes beyond traditional if-else or switch statements. It lets you match not just values, but also types, structures, and shapes of data—and automatically extract components as part of the process.

For example, in Rust:

enum Shape {
    Circle(f64),
    Square(f64),
}

fn area(shape: Shape) -> f64 {
    match shape {
        Shape::Circle(r) => 3.14 * r * r,
        Shape::Square(s) => s * s,
    }
}

This kind of code is declarative and concise. You’re not manually checking conditions—you’re saying “if it looks like this, do that.”

When to use pattern matching: think about your code in terms of “if the data structure is like this, then I want it to do this”

Key advantages of pattern matching:

  • Readability: It’s easier to understand intent when logic follows natural data structures.
  • Concise Code: Reduces boilerplate, especially for condition-heavy logic.
  • Type Safety: Many functional languages perform exhaustive checks to ensure all possible patterns are handled, helping prevent runtime errors.
  • Built-in Destructuring: You can directly extract data from complex structures in the match itself.

In short, pattern matching is about writing cleaner, safer, and more expressive code. The question is—how well does this fit into C++’s existing model?

Let’s see what C++ currently offers.

Pattern matching in C++ today

C++ doesn’t yet have built-in pattern matching in the same way languages like Rust or Haskell do, but developers have long used alternatives to achieve similar results.

Traditional approaches

Historically, C++ developers have relied on combinations of switch, if/else, and polymorphism via virtual functions to manage conditional logic.

Example using switch:

#include <iostream>

int main() {
    int x = 2;

    switch (x) {
        case 1:
            std::cout << "One\n";
            break;
        case 2:
            std::cout << "Two\n";
            break;
        default:
            std::cout << "Other\n";
    }
}

While effective for simple values, this method doesn’t scale well to complex or variant-based data types.

Modern alternatives in C++

1. std::variant + std::visit (C++17)

With the introduction of std::variant, C++ gained a way to store one value out of a fixed set of types, similar to Rust’s enum. Pattern-like behavior can be achieved using std::visit.

Example:

#include <variant>
#include <iostream>

int main() {
    std::variant<int, double> data = 3.14;

    std::visit([](auto&& value) {
        std::cout << "Value: " << value << "\n";
    }, data);
}

This approach allows for type-safe handling of different alternatives, though the syntax can be verbose for complex use cases.
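
If you want per-alternative handling that reads closer to a match statement, a common C++17 trick is the “overloaded lambdas” idiom. The sketch below shows that idiom (the helper struct is user-written, not a standard library facility):

#include <iostream>
#include <string>
#include <variant>

// Merge several lambdas into a single visitor type (common C++17 idiom)
template <class... Ts> struct overloaded : Ts... { using Ts::operator()...; };
template <class... Ts> overloaded(Ts...) -> overloaded<Ts...>;

int main()
{
    std::variant<int, double, std::string> value = std::string("hello");

    std::visit(overloaded{
        [](int i)                { std::cout << "int: " << i << "\n"; },
        [](double d)             { std::cout << "double: " << d << "\n"; },
        [](const std::string& s) { std::cout << "string: " << s << "\n"; },
    }, value);
}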

2. if constexpr (C++17)

if constexpr enables compile-time conditional branching, allowing decisions based on template parameters. It’s especially useful in generic code.

Example:

#include <iostream>
#include <type_traits>

template<typename T>
void printType(T value) {
    if constexpr (std::is_integral<T>::value) {
        std::cout << "Integral: " << value << "\n";
    } else if constexpr (std::is_floating_point<T>::value) {
        std::cout << "Floating point: " << value << "\n";
    } else {
        std::cout << "Other type\n";
    }
}

This is a form of type pattern matching evaluated at compile time.

3. Structured Bindings (C++17)

Structured bindings allow destructuring complex objects into individual components, somewhat like matching the shape of data.

Example:

#include <iostream>
#include <utility>

int main() {
    std::pair<int, int> point = {3, 4};
    auto [x, y] = point;

    std::cout << "X: " << x << ", Y: " << y << "\n";
}

While not pattern matching in itself, this feature improves readability and integrates well with manual matching logic.
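
Combined with plain conditionals, structured bindings can approximate a simple shape-plus-value match. A minimal sketch (nextPoint is a made-up function):

#include <iostream>
#include <utility>

std::pair<int, int> nextPoint() { return {0, 4}; }

int main()
{
    auto [x, y] = nextPoint();   // destructure the returned pair

    // Manual "pattern" checks on the extracted components
    if (x == 0 && y == 0)
        std::cout << "origin\n";
    else if (x == 0)
        std::cout << "on the y-axis\n";
    else
        std::cout << "somewhere else\n";
}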

Although C++ doesn’t yet support native pattern matching, developers can simulate it using tools from modern C++. These approaches are powerful, but not always elegant, which raises the question: Should C++ introduce native support?

Let’s look at how third-party libraries and upcoming proposals are taking things further.

C++ libraries that support pattern matching

While native pattern matching is still under consideration for future versions of C++, several third-party libraries and proposals already offer powerful ways to simulate or implement it today.

1. Mach7 (Multiple-Dispatch Pattern Matching for C++)

Mach7 is a lightweight library that brings functional-style pattern matching to C++. It supports value patterns, type patterns, guard conditions, and even open patterns, mimicking constructs from functional languages like Haskell or OCaml.

Example:

#include <iostream>
#include "match.hpp"  // Mach7 header

using namespace mch;

struct Shape { virtual ~Shape() = default; };
struct Circle : Shape { double r; };
struct Square : Shape { double s; };

void describe(Shape* shape) {
    Match(shape)
    {
        Case(Circle* c) std::cout << "Circle with radius " << c->r << "\n";
        Case(Square* s) std::cout << "Square with side " << s->s << "\n";
        Otherwise(std::cout << "Unknown shape\n");
    }
    EndMatch
}

This syntax provides a clear, declarative way to handle multiple types, much like match statements in Rust or Scala.

2. C++ Pattern matching TS (Technical Specification)

C++ standardization efforts have proposed several enhancements to introduce pattern matching directly into the language. One such proposal is P1371R3, which introduces a match expression similar to those found in functional languages.

While still in the proposal phase, it’s a sign that the C++ community is actively exploring native support. If accepted, future C++ versions (possibly C++26) could feature syntax like:

match (value) {
    case 0:    std::cout << "Zero\n"; break;
    case 1:    std::cout << "One\n"; break;
    case auto x if (x > 1): std::cout << "Greater than one\n"; break;
}

3. Boost.Hana

Boost.Hana is a metaprogramming library for compile-time computations using modern C++ features. While not a pattern-matching library per se, it enables matching patterns at compile time, making it useful for highly generic or constexpr-heavy designs.

Example (simplified):

#include <boost/hana.hpp>
#include <iostream>

namespace hana = boost::hana;

int main() {
    auto result = hana::if_(hana::bool_c<true>,
        []{ return "matched true"; },
        []{ return "matched false"; }
    );

    std::cout << result() << "\n"; // matched true
}

It’s more complex but powerful for metaprogramming scenarios where performance and type safety are critical.

These tools bring C++ closer to the expressive power of functional languages, sometimes at the cost of verbosity or complexity. Up next, we’ll weigh the pros and cons of fully embracing functional constructs like these in the core language itself.

The case for embracing functional constructs

Modern C++ has come a long way from its purely procedural roots. As software complexity grows, there’s a strong case to be made for adopting more functional programming constructs, with pattern matching leading the charge.

Pros of integrating pattern matching in C++

Increased expressiveness

Pattern matching lets developers express logic more clearly and concisely. Instead of juggling multiple if-else or switch statements, you can represent intent directly, making your code more intuitive and aligned with how we reason about data.

Safer and more declarative code

In languages like Rust or Haskell, pattern matching often forces you to handle all possible cases—a feature known as exhaustiveness checking. This reduces the chances of missing edge cases or encountering runtime errors due to unhandled types or values.

Alignment with modern language trends

Languages such as Rust, Scala, Kotlin, and even newer versions of Java and JavaScript are embracing pattern matching to simplify complex branching logic. For C++ to remain competitive and developer-friendly, adopting similar paradigms is a natural evolution.

How pattern matching benefits real C++ workflows

Pattern matching isn’t just about syntax sugar—it has real utility in domains where C++ already shines:

  • Abstract Syntax Trees (ASTs): In compilers or interpreters, pattern matching makes it easier to traverse and manipulate tree-like structures based on node types.
  • Rule Engines: Defining and applying transformation or validation rules becomes clearer when conditions can be matched declaratively.
  • Embedded Systems and Finite State Machines: Handling states and transitions using pattern-based constructs can reduce bugs and improve maintainability.

In all these cases, integrating pattern matching results in less boilerplate, cleaner control flow, and more robust logic, without sacrificing C++’s performance edge.
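
As a concrete taste of the state-machine case mentioned above, here is a minimal, illustrative sketch (not production code) of a tiny finite state machine written with std::variant, std::visit, and if constexpr, which is about as close as C++ gets to matching on states today:

#include <iostream>
#include <type_traits>
#include <variant>

// Each state is its own small type
struct Idle    {};
struct Running { int progress; };
struct Done    {};

using State = std::variant<Idle, Running, Done>;

// Advance the machine by "matching" on the current state's type
State step(const State& s)
{
    return std::visit([](const auto& st) -> State {
        using T = std::decay_t<decltype(st)>;
        if constexpr (std::is_same_v<T, Idle>) {
            return Running{0};
        } else if constexpr (std::is_same_v<T, Running>) {
            return st.progress >= 2 ? State{Done{}} : State{Running{st.progress + 1}};
        } else {
            return Done{};   // Done stays Done
        }
    }, s);
}

int main()
{
    State s = Idle{};
    for (int i = 0; i < 5; ++i) {
        s = step(s);
        std::cout << "state index: " << s.index() << "\n";  // prints 1, 1, 1, 2, 2
    }
}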

The case against (or cautions)

While pattern matching brings many benefits, it’s important to consider the potential trade-offs, especially in a language like C++ that has always favored performance, control, and minimalism.

C++ philosophy: performance over abstraction

C++ was designed with zero-cost abstractions in mind. Features are only added when they offer clear advantages without incurring runtime overhead. Some argue that pattern matching—especially if misused—could introduce abstraction layers that compromise performance or transparency.

Compilation overhead and learning curve

Adding a native match construct could make compile times longer and error messages harder to decipher, especially when templates, lambdas, and concepts are involved. It also increases the cognitive load for newcomers, who already face a steep learning curve with C++’s complex syntax and paradigms.

Risk of bloated language design

C++ already suffers from being “feature-rich to a fault.” Critics worry that pattern matching, while elegant in theory, might become another overloaded mechanism that interacts unpredictably with templates, operator overloading, and legacy code.

Existing features already offer workarounds

Modern C++ offers several ways to simulate pattern matching, such as std::variant, std::visit, if constexpr, and structured bindings. While not perfect, these tools give developers flexibility without needing entirely new syntax or semantics.

In summary, while pattern matching could elevate C++ expressiveness, it also raises valid concerns about complexity, compilation cost, and philosophical fit. The real challenge is finding a balance between innovation and the core identity of the language.

Let’s now explore where things might be heading in the future.

What’s coming in the future? (C++23/C++26)

The future of pattern matching in C++ lies in the hands of ongoing proposals and community discussions, many of which aim to bring functional-style constructs to the language without sacrificing C++’s core principles.

Proposals in progress

One of the most notable efforts is P1371R3, which proposes a native match statement—conceptually similar to pattern matching in Rust or Scala. This construct would allow developers to match on values, types, and even conditions within a clean, expressive syntax.

The goal is to make pattern matching:

  • Type-safe and exhaustive
  • Compatible with C++’s type system
  • Composable with other modern C++ features like std::variant, structured bindings, and concepts

Other discussions also explore integrating guards, binding patterns, and OR patterns, inspired by the rich semantics found in functional languages.

Current status and community debate

As of now:

  • Pattern matching was not included in C++23.
  • It’s being actively explored for C++26, but nothing is finalized.
  • Some developers are excited about the direction, citing improved expressiveness and readability.
  • Others are cautious, warning about language bloat, increased compiler complexity, and overlap with existing features like if constexpr and std::visit.

The C++ standards committee continues to gather feedback, refine syntax models, and weigh the impact of such a feature on both new and existing codebases.

In short, pattern matching is on C++’s radar, and the road ahead looks promising—but cautious. Whether it makes it into the official standard depends on community consensus and the committee’s vision for the language’s evolution.

Final verdict: Should C++ embrace it?

Pattern matching in C++ is no longer just a theoretical discussion—it’s a real, evolving conversation within the community. Let’s briefly revisit the key points on both sides.

The pros:

  • Enhances expressiveness and readability of complex branching logic.
  • Enables safer, declarative code, particularly with exhaustiveness checking.
  • Aligns C++ with modern language trends, helping attract newer generations of developers.

The cons:

  • Introduces potential language bloat and adds another abstraction layer.
  • Increases compiler complexity and learning curve for beginners.
  • Current features like std::variant, if constexpr, and libraries like Mach7 already offer viable workarounds.

So, should C++ embrace pattern matching?

The answer depends on your perspective:

  • If you prioritize expressiveness, maintainability, and modern design, then native pattern matching would be a welcome step forward.
  • If you value minimalism, raw performance, and avoiding abstraction, the existing tools may already meet your needs.

In either case, the growing support for pattern matching, through libraries and proposals, signals a shift in how C++ is evolving. Whether adopted in C++26 or later, the feature will likely continue to shape discussions around language design and developer ergonomics.

What do you think?

Do you want to see native pattern matching in C++? Or do you prefer the language to stay lean and low-level?

Share your thoughts in the comments. Let’s keep the discussion going!

How to get a job as a game developer in 2025 – Part 1: Skills, Tools & Job Tips
For many, game development isn’t just a career—it’s a dream job. In 2025, that dream still holds strong, even as the industry navigates shifting trends. While the gaming market remains massive, it’s important to note that it has experienced some contraction since the pandemic boom. Studios have become more selective, and competition has intensified.

Still, the global gaming industry continues to evolve across PC, console, and mobile platforms, driven by billions of players who demand fresh experiences and innovation. According to recent data, there are still approximately 3.32 billion active video game players worldwide, underscoring the scale and ongoing opportunity for gaming content.

But with opportunity comes competition. Thousands of enthusiastic candidates pursue game development positions, including computer science graduates, self-taught coders, modders, and indie creators. To stand out among numerous applicants, you need both passion and strong technical abilities, along with practical experience and production-ready tools that enhance your speed and reliability.

So, how do you actually become a game developer in 2025? Let’s break it down.

Which part of game development do you want to be in?

Before you become a game developer, know that one of the biggest pitfalls aspiring game developers fall into is thinking that their main output is just a game. It’s not as if you simply code and out pops a fleshed-out game.

Choosing your focus area


While programming is a critical component, responsible for bringing the game mechanics and logic to life, it is just one piece of a larger puzzle. Game development is inherently a collaborative process that requires the integration of various disciplines. 

  • Creative designers play a pivotal role in conceptualizing the game’s visual style and narrative, ensuring that the game is not only functional but also engaging and immersive. Their work lays the foundation for the game’s aesthetic and storytelling elements. 
  • Sound engineers create the auditory experience of a game.
  • Level designers focus on creating the game’s environment and challenges, ensuring that each level is both enjoyable and appropriately challenging for players.
  • Furthermore, marketers are vital for promoting the game and reaching the target audience, employing strategies to generate buzz and drive sales.
  • Project/Product Managers are strong at planning and coordination. These roles ensure teams stay on schedule and aligned with product goals.

These roles require a deep understanding of player psychology and game mechanics to create a balanced and rewarding experience. 

Know the studio type that suits you

The composition of a game development team can vary significantly depending on the size and scope of the project. Indie games, for example, might be developed by a small team or even a single individual who wears multiple hats, handling everything from coding to marketing. 

In contrast, large-scale AAA games often involve hundreds of specialists, each focusing on a specific aspect of the game. Regardless of the project’s size, game development is inherently a multidisciplinary field that requires collaboration and coordination among various experts. 

Game studios come in all shapes and sizes, and your ideal work environment might depend on the type of experience you’re looking for:

  • Indie studios / solo developers
    Known for innovation and experimentation, indie projects often explore new genres or mechanics. You’ll likely wear multiple hats and have lots of creative input.
  • Mid-sized studios
    These teams aim for polished, full-length games. They balance structure with flexibility, offering both creative opportunities and some stability.
  • Large/enterprise studios
    AAA studios work on blockbuster franchises with massive teams and budgets. Expect specialization, higher expectations, and strict production pipelines—but also a huge audience and strong career growth potential.

Knowing where you see yourself, both in terms of role and studio type, can guide your portfolio, networking, and even the tools you choose to master. At its core, being a game developer means being part of a diverse team working towards a common goal: creating an engaging and memorable gaming experience. 

For the purposes of this blog, we'll focus on the game developer in the narrow sense: the programmer. But before you apply anywhere, make sure you know exactly which role you are applying for.

Key skills you need to break in

Becoming a game developer in 2025 starts with building both technical and creative skills. Studios look for developers who can program well, work in a team, solve problems, and contribute to overall game quality.

Key Skills You Need to Become a Game Developer - Infographic

Programming languages

C++ remains the top programming language for game development and is the language of Unreal Engine. It gives programmers direct access to system resources, which is what makes high-performance gameplay and graphics code possible. C#, meanwhile, is the preferred language for Unity projects. Learning either, and ideally both, will significantly expand your career opportunities.

Also read: C++ versus Blueprints: Which should I use for Unreal Engine game development?

Problem-solving and core concepts

To make a seamless gameplay experience, developers rely on creative problem-solving to address complicated issues. The skills that game studios consider most valuable include:

  • Applied math and physics
  • 3D vector math
  • Trigonometry and transformation matrices
  • Game loops, object-oriented design, and memory management

A solid grasp of these fundamentals lets you write efficient, bug-free code without needing a PhD.
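To make this concrete, here is a minimal, self-contained C++ sketch of the kind of 3D vector math that comes up constantly in gameplay code. The Vec3 type and helper functions are illustrative names, not any particular engine's API.

#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

float Dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

float Length(const Vec3& v)
{
    return std::sqrt(Dot(v, v));
}

int main()
{
    Vec3 forward{0.0f, 0.0f, 1.0f};   // the direction the player is facing
    Vec3 toEnemy{3.0f, 0.0f, 4.0f};   // vector from the player to an enemy

    // The dot product of normalized vectors tells you whether the enemy
    // is in front of the player (positive) or behind it (negative).
    float len = Length(toEnemy);
    Vec3 dir{toEnemy.x / len, toEnemy.y / len, toEnemy.z / len};
    std::printf("facing factor = %.2f\n", Dot(forward, dir));
}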

Version control and debugging

Modern studios expect developers to collaborate using Git as their version control system. Creating branches, managing version history, and resolving merge conflicts are essential skills to master. Being able to track down crashes, hunt memory leaks, and optimize FPS will set you apart from other developers.

Performance optimization

Games need to run smoothly on a wide range of devices, so optimization is an integral part of game development work. Build performance thinking into your workflow from the beginning: keep gameplay logic lean to reduce latency, and minimize CPU overhead and draw calls.
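As a small illustration of that mindset, the sketch below keeps heap allocations out of the per-frame hot path by allocating a buffer once up front; the Particle struct and UpdateParticles function are hypothetical placeholders, not any engine's API.

#include <vector>

struct Particle { float x, y, z; };

void UpdateParticles(std::vector<Particle>& particles, float dt)
{
    for (Particle& p : particles)
    {
        p.y -= 9.8f * dt;   // simple per-frame update, no allocation
    }
}

int main()
{
    std::vector<Particle> particles;
    particles.reserve(10000);          // allocate once, up front
    particles.resize(10000);

    const float dt = 1.0f / 60.0f;     // fixed 60 FPS timestep
    for (int frame = 0; frame < 600; ++frame)
    {
        UpdateParticles(particles, dt);   // the hot loop reuses the same buffer
    }
}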

How to gain experience before you’re hired

Getting into the gaming industry takes more than knowledge: it takes actual work you can show. Studios tend to favor applicants with finished projects and hands-on experience over academic degrees or certifications. Here are several effective ways to gain experience before securing your first job.

Personal or indie projects

Begin your game development journey by making your own games, regardless of scale. A basic platformer, a puzzle game, or a first-person demo will demonstrate your ability to turn concepts into playable games.

WATCH: Whole Tomato lead engineer Chris Gardner shares his game development story.

Build your project in a well-known engine such as Unreal or Unity, prioritizing performance, playability, and visual quality. Completed projects prove your drive, creative thinking, and technical expertise, all qualities studios highly appreciate.

Join game jams

Illustration of four game developers collaborating at a table with laptops during a game jam, under a bold "Join Game Jams" heading, with icons representing ideas and systems above them.

Events like Ludum Dare and the Global Game Jam combine tight deadlines with team collaboration, giving developers the chance to ship projects while exploring innovative concepts. They also sharpen your skills and teach you to set realistic project scopes and meet deadlines, which are essential professional competencies.

Contribute to open-source projects or plugins

Contributing to open-source projects, particularly within the Unreal Engine ecosystem, demonstrates practical development skills to potential employers. Your contributions become publicly accessible proof of your work that you can include in your portfolio, and they connect you with experienced developers who can serve as mentors.

Internships or modding communities

Internships, even unpaid ones, expose you to professional workflows, code reviews, and team communication. If internships aren't available, join modding communities instead. The Unreal Slackers Discord is a great place to hang out with other developers, and also a platform to find and offer project-based game development work.

The path to professional game development often begins with creating mods for games such as Skyrim, Minecraft, and Half-Life. Working on mods helps developers understand how to operate within existing codebases and engine frameworks, which mirrors real studio environments.

You don’t need to have all the answers right away, but knowing your preferences helps you build the right skills and find teams where you can thrive.

Stand out in the application process

When you’re applying for your first job in game development, your application materials are your front line. Recruiters and hiring managers often skim dozens—if not hundreds—of applications, so your goal is to make yours instantly relevant, clear, and compelling.

Tailor your resume for game studios

Avoid sending out a one-size-fits-all resume. Instead, customize it for each studio and role. Focus on:

  • Relevant projects – Whether personal, academic, or from a game jam, list games where you made a meaningful contribution.
  • Shipped titles or demos – Even a polished prototype shows initiative and execution.
  • Your role and impact – Be specific about what you did: gameplay programming, level design, bug fixing, performance tuning, etc.

Show results where possible—FPS improvements, memory savings, or even user ratings if your game is public.

Build a portfolio that shows, not tells

A strong online portfolio can set you apart immediately. Create a simple website or use platforms like GitHub Pages or Notion to host:

  • GitHub repositories of your code
  • Playable web or downloadable demos
  • Screenshots and short videos
  • Clear write-ups explaining your role, tools used, and development challenges

This gives studios instant proof of your abilities and thinking process.

Highlight familiarity with your chosen game engine specialization

If you’re targeting Unreal Engine studios (especially those using C++), showcasing your experience with Unreal Engine + Visual Studio is a major plus.

Mention specific practices:

  • Using Blueprints alongside native C++
  • Efficient navigation and refactoring using Visual Assist
  • Debugging Unreal projects in Visual Studio
  • Project packaging and performance testing

Even better—include a brief walkthrough of your development setup in your portfolio.

Be ready for technical interviews

Many studios will assess your problem-solving through whiteboard exercises or take-home coding challenges. Common topics mirror the fundamentals covered above: applied math, object-oriented design, memory management, and gameplay logic.

Practicing these in advance—especially in the context of games—can give you a big edge over other candidates. 

Where to find jobs in 2025

After developing your skills and portfolio, you need to identify the right job search locations. The game industry in 2025 is more globally connected and remote-friendly than ever, opening doors for developers of all backgrounds and locations.

To give you an idea of their preferences: AAA studios tend to hire experienced developers with a proven track record, because the projects they make must succeed. Indie studios, which can't compete on project size, focus instead on executing a novel game mechanic or creating a new genre.

Apply to major studios

Major studios are continuously recruiting new talent. Competition is intense, but these studios provide well-organized development paths, mentorship, and opportunities to work on globally recognized intellectual properties.

Some top employers to watch:

  • Epic Games – The creators of Unreal Engine and Fortnite
  • Ubisoft – Known for open-world franchises like Assassin’s Creed and Far Cry
  • Activision – Publishers of Call of Duty and other major titles
  • CD Projekt – Developers of The Witcher and Cyberpunk 2077

Set up job alerts for their careers pages to stay ahead of new openings.

Explore indie studios and remote startups

Independent studios have continued to multiply in 2025, and many welcome candidates who work remotely. These teams prize creative thinking, flexible ways of working, and people who are versatile in their roles.

Remote startups and small studios give you the chance to:

  • Work on fresh, experimental ideas
  • Have more ownership over gameplay features
  • Get noticed faster within the team

Try reaching out directly on social platforms like X (formerly Twitter) or Discord, where indie devs often post hiring calls.

Use specialized job boards

Traditional job platforms are still useful, but specialized game development boards will save you time and show you more relevant roles.

Recommended boards:

  • Hitmarker – Covers game dev, esports, marketing, QA, and more
  • GameJobs.co – Features listings from both indie and AAA studios
  • LinkedIn – Still great for networking and spotting hidden opportunities via connections

Create a standout LinkedIn profile, follow recruiters and dev studios, and join relevant groups to stay visible.

Join the Unreal Engine community

If you're working with Unreal Engine, you'll find opportunities inside the ecosystem itself: community hubs such as the official Unreal Engine forums and the Unreal Slackers Discord mentioned earlier.

These platforms aren’t just for job hunting—they’re also where developers collaborate, showcase projects, and find freelance gigs.

Final tip: Be relentless, but smart

Breaking into the game industry isn’t easy, but it’s absolutely possible. The competition is real, the learning curve is steep, and rejection is part of the journey. But the people who make it? They’re the ones who keep going—refining their skills, shipping projects, and showing up in communities.

That said, persistence alone isn’t enough. You need to be strategic. Focus on building projects that showcase your strengths. Connect with other developers, join game jams, contribute to forums, and ask for feedback. Every interaction and every line of code gets you one step closer.

And don’t underestimate the value of the tools you choose. Using game-focused development tools like Visual Studio with Visual Assist can help you work smarter, navigate large codebases faster, and spend more time creating rather than troubleshooting.

Keep learning. Keep building. Keep playing. And remember: every great developer started somewhere—usually with a small project, a lot of curiosity, and the determination to level up.

Visual Assist - Visual Studio - Unreal Engine Game Development CTA Banner to download Visual Assist

The post How to get a job as a game developer in 2025 – Part 1: Skills, Tools & Job Tips first appeared on Tomato Soup.

]]>
https://www.wholetomato.com/blog/how-to-get-a-job-as-a-game-developer-in-2025-part-1-skills-tools-job-tips/feed/ 0 4195
Struggling with Visual Studio Performance? Visual Assist Has the Fix https://www.wholetomato.com/blog/visual-studio-performance-fix-with-visual-assist/ https://www.wholetomato.com/blog/visual-studio-performance-fix-with-visual-assist/#respond Thu, 08 May 2025 12:00:15 +0000 https://www.wholetomato.com/blog/?p=4170 If you’re a developer working in Visual Studio, chances are you’ve hit a few bumps in the road—slow load times, clunky navigation, unreliable IntelliSense, and the occasional “Where did that file go?” moment. These common...

The post Struggling with Visual Studio Performance? Visual Assist Has the Fix first appeared on Tomato Soup.

]]>
If you’re a developer working in Visual Studio, chances are you’ve hit a few bumps in the road—slow load times, clunky navigation, unreliable IntelliSense, and the occasional “Where did that file go?” moment. These common pain points can quickly add up, dragging down your productivity and turning routine coding tasks into frustrating time sinks.

Many developers accept these issues as just part of the job. But what if they didn’t have to be?

Visual Assist, a powerful productivity extension for Visual Studio, was built to solve the exact problems that slow developers down, without changing your entire workflow. In fact, you might already be struggling with features that have smarter, faster alternatives within reach.

In this article, we’ll explore how Visual Assist can help you improve Visual Studio performance, uncover better ways to navigate large projects, and fix annoying quirks like IntelliSense not working, especially if you’re working with Unreal Engine or C++. Whether you’re dealing with slow Visual Studio response times or you’re simply unaware of better options, this guide will show you how to reclaim your flow and speed things up.

Let’s take a closer look at the Visual Assist features that can fix what’s slowing you down.

Common Visual Studio pain points (and how Visual Assist fixes them)

In this section, we’ll explore common Visual Studio performance issues that most developers face—and how Visual Assist provides effective solutions.

Problem #1: Clunky file navigation in large projects

The problem
Working with large codebases in Visual Studio often means dealing with hundreds—or even thousands—of files spread across multiple folders. While Visual Studio’s native file explorer gets the job done, it can feel painfully sluggish when navigating complex projects. Endless scrolling and limited filtering options disrupt your focus and waste precious time.

The fix
Visual Assist’s Open File in Solution feature offers a faster, smarter alternative. Designed for performance, it allows you to locate any file instantly, even in massive solutions, using just a few keystrokes. The built-in filtering engine narrows down your results as you type, letting you jump to exactly what you need without wading through the entire project tree.

Visual Assist offers a suite of powerful navigation tools specifically designed for large solutions. These tools let you jump between files, symbols, methods, and related code with incredible speed and accuracy:

  • Open File in Solution: Quickly find and open any file with just a few keystrokes. It supports filtering, wildcards, and even fuzzy search. Explore more about Open File in Solution.
Accessing the Open File in Solution feature via the VAssistX menu in Visual Studio


 

Visual Assist – Open File in Solution Example

  • Find Symbol in Solution: Search for any class, method, or variable—even if you only remember part of the name. We will discuss more about this feature in the next section.
  • Goto Related: Instantly jump between related files, like header/implementation pairs or base/derived classes. Read more on the Goto Related feature.
Go to Related feature in Visual Assist


 

Go to Members of the Class User


 

Members of the Class User


 

  • List Methods in Current File: Navigate large files by jumping to any method or function in a dropdown list.

These features eliminate the need to scroll endlessly or manually search through your folder structure. Whether you’re working in C++, C#, or Unreal Engine code, Visual Assist helps you move through your project like a pro.

Bonus tip
Want to locate a file or symbol without knowing the exact name? Just use an asterisk * in your search. For example, typing *Manager in Open File in Solution or Find Symbol will return results like UserManager, AccountManager, and more. Fuzzy search makes finding things faster—even when your memory isn’t perfect.

Problem #2: Can’t recall the exact name of a symbol

The Problem
You’re in the zone, deep into a feature or bug fix, and you need to find a class, method, or file—but you can’t remember the exact name. Visual Studio’s default search isn’t forgiving. If your input isn’t precise, you’re met with zero results or a long list of unrelated suggestions, forcing you to waste time browsing through files manually.

The Fix
Visual Assist makes this easier with fuzzy search built into tools like Open File in Solution and Find Symbol. These features allow you to search using partial names or approximate guesses. Can’t remember if it was UserManager or AccountManager? Just type *manager, and Visual Assist will surface relevant results instantly—even if your memory is fuzzy.

Using the Find Symbol feature in Visual Assist to locate symbols quickly


 

Visual Assist Find Symbol example


Bonus Tip
Combine fuzzy search with filters to narrow down by file type, scope, or symbol kind. Want even more control? Use negative filters by adding -word to your search. For example, *Manager -Account shows all items with “Manager” but excludes any that include “Account”. It’s one of the fastest ways to find exactly what you need, especially in large or unfamiliar codebases.

Problem #3: Unreal Engine source code shows incorrect red squiggles

The Problem
If you’re developing with Unreal Engine in Visual Studio, you’ve probably run into frustrating red squiggles under perfectly valid code. This usually isn’t your fault—it’s IntelliSense struggling to interpret Unreal Engine’s complex macro system. These false errors clutter your editor, create confusion, and slow down your workflow.

The Fix
Visual Assist comes with dedicated Unreal Engine support that understands UE’s syntax, reflection macros, and naming conventions far better than default IntelliSense. It correctly parses Unreal code, eliminating misleading squiggles and giving you accurate suggestions. In fact, many developers choose to disable IntelliSense entirely and rely solely on Visual Assist for parsing, symbol lookup, and navigation, resulting in cleaner code views and faster performance.
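For context, here is a rough sketch of the kind of macro-decorated Unreal Engine class that stock IntelliSense often trips over. It follows standard UE conventions (UCLASS, GENERATED_BODY, UPROPERTY, UFUNCTION), but the actor and its health logic are illustrative, and the snippet only compiles inside an actual UE project.

// HealthActor.h -- reflection macros and the .generated.h include are standard UE boilerplate
#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "HealthActor.generated.h"

UCLASS()
class AHealthActor : public AActor
{
    GENERATED_BODY()

public:
    // Macros like these are what IntelliSense commonly flags with false errors
    UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = "Stats")
    float Health = 100.0f;

    UFUNCTION(BlueprintCallable, Category = "Stats")
    void ApplyDamage(float Amount) { Health -= Amount; }
};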

Bonus Tip
You can disable IntelliSense in Visual Studio’s settings and still enjoy full code completion, navigation, and error-free parsing through Visual Assist—especially helpful when working on large UE4 or UE5 projects.

Visual Assist for Unreal Engine


Problem #4: Visual Studio lags when typing or scrolling

The Problem

When working on extensive projects, many developers experience lag in Visual Studio, particularly while typing or scrolling. This slowdown is often reported when IntelliSense is enabled, especially in large or complex codebases. Developers have observed that background parsing and real-time suggestions can affect responsiveness and break focus. In Unreal Engine projects, for example, IntelliSense may even become unresponsive, prompting many to disable it in favor of more reliable alternatives like Visual Assist.

The Fix

Visual Assist is built for speed. Its parsing engine operates more efficiently than IntelliSense, particularly when working with large or complex projects. Disabling IntelliSense and letting Visual Assist handle code suggestions, navigation, and context-aware features eliminates these performance delays, allowing you to continue coding without interruptions.

Bonus Tip

Many developers see performance improve right away when they disable IntelliSense completely and let Visual Assist handle code completion, reference finding, and symbol navigation.

How to Enable Visual Assist’s Code Suggestions

  1. Open Visual Studio.
  2. Go to the Extensions menu → VAssistX → Visual Assist Options.

Open Visual Assist Options

  3. In the Visual Assist Options window, navigate to Suggestions.

Visual Assist Options Window

  4. Here you can enable the required options.
  5. Click OK to apply the settings.

Optional: Disable IntelliSense (for best performance)

To rely only on Visual Assist and reduce lag:

  • Go to Tools → Options → Text Editor → C/C++ → Advanced
  • Set Disable IntelliSense to True

How to disable IntelliSense in Visual Studio Options window

This allows Visual Assist to fully handle code completion, navigation, and suggestions, resulting in a smoother experience, especially in large projects or when working with Unreal Engine.

Problem #5: Limited refactoring tools in Visual Studio

The Problem
While Visual Studio offers some built-in refactoring options, they often fall short, especially in complex C++ projects. Refactors like renaming symbols or introducing variables can be inconsistent, incomplete, or prone to errors depending on the context. This makes developers hesitant to trust these tools, slowing down their workflow.

The Fix
Visual Assist provides a robust and reliable set of refactoring tools designed with real-world C++ usage in mind. You get smart options like Rename, Encapsulate Field, Introduce Variable, Change Signature, and Create from Usage, all backed by deeper code understanding. These tools work more consistently and accurately across different project types and coding styles, helping you restructure code confidently and without breaking anything.

Bonus Tip

Visual Assist’s refactoring tools are not only more consistent—they’re also smarter. For example, they understand Unreal Engine macros like UFUNCTION and UPROPERTY, allowing you to safely rename or refactor even macro-decorated code that typically breaks under standard IntelliSense-based tools.

Create from Usage – Smart refactoring made easy

Try the Create from Usage feature when writing new code—it lets you generate declarations and implementations on the fly by referencing them before they exist. It’s a fast and intuitive way to build out logic without breaking your coding rhythm.

How to Use “Create from Usage” in Visual Assist
  1. Just write your code as if the function, variable, or method already exists.

For example:

class MyClass {};

int main()
{
    MyClass obj;
    obj.DoSomethingUseful(); // <- Now Visual Assist can step in!
}

If DoSomethingUseful() hasn’t been declared or defined yet, Visual Assist will detect this.

  2. Place your cursor on the symbol (e.g., method or variable) you just used.
  3. Press Alt+Shift+Q (Visual Assist Quick Action menu)

Alternatively, right-click the symbol and look for Quick Actions and Refactorings → Create from Usage.

Quick Actions and Refactorings menu items

 

Create method -- Visual Assist

 

Visual Assist will offer to generate the corresponding declaration and definition for you—automatically placing them in the appropriate header and source files if needed.

Declared method in MyClass

Tip:

This feature is especially useful when you’re doing test-driven development or writing out logic before formalizing structure. It keeps your flow uninterrupted by letting Visual Assist handle the boilerplate creation.

Conclusion

Visual Studio is a powerful IDE—but as your projects grow, so do the cracks in its default experience. From sluggish file navigation and limited refactoring tools to IntelliSense breakdowns in Unreal Engine projects, these friction points can quietly eat away at your productivity.

That’s where Visual Assist steps in.

Whether you’re building AAA games in Unreal Engine, managing sprawling C++ projects, or simply tired of lag and limitations, Visual Assist provides the tools to help you code faster, smarter, and more confidently. With features like fuzzy symbol search, advanced refactoring, code suggestions, and context-aware navigation, Visual Assist fills in the gaps and removes the roadblocks that slow you down.

Most importantly, it integrates seamlessly into your workflow—no steep learning curve, no drastic changes. Just better performance, deeper code understanding, and a smoother development experience.

If you’ve been struggling with Visual Studio performance, now you know: Visual Assist has the fix.

Download a free trial of Visual Assist and experience the difference for yourself.

 

The post Struggling with Visual Studio Performance? Visual Assist Has the Fix first appeared on Tomato Soup.

]]>
https://www.wholetomato.com/blog/visual-studio-performance-fix-with-visual-assist/feed/ 0 4170
C++ Modules: What it promises and reasons to remain skeptical https://www.wholetomato.com/blog/c-modules-what-it-promises-and-reasons-to-remain-skeptical/ https://www.wholetomato.com/blog/c-modules-what-it-promises-and-reasons-to-remain-skeptical/#respond Fri, 18 Apr 2025 09:55:32 +0000 https://www.wholetomato.com/blog/?p=4158 Introduction C++ has never been afraid of complexity—but even for a language known for performance and control, the #include system has seemed like a bygone from another era. Modules in C++ were a long-awaited upgrade...

The post C++ Modules: What it promises and reasons to remain skeptical first appeared on Tomato Soup.

]]>
Introduction

C++ has never been afraid of complexity—but even for a language known for performance and control, the #include system has long seemed like a relic of another era.
Modules in C++ were a long-awaited upgrade aimed at cleaning up the mess of includes, speeding up build time, and making large-scale C++ development a bit less painful.

Standardized in C++20 and expanded in C++23, modules promise big gains in compile times. But as of 2025, they're still not widely adopted in most teams' toolchains. Some developers are diving in and seeing real benefits. Others are holding back, citing spotty compiler support, tricky build integration, and a reluctance to face the learning curve that comes with any paradigm shift.

This post isn't about selling you on the latest trend or convention—it's a practical look at what C++ modules actually offer today, where the limitations still lie, and in which cases it makes sense to adopt them. Then you can decide for yourself.

A Quick Primer on C++ Modules

If you've worked with C++ for more than five minutes, you've dealt with header files. They're powerful, but they also add noise: macros, include guards, and redundant includes that slow down compilation and make dependency tracking a chore. Modules were introduced to alleviate some of these issues.

At a high level, C++ modules replace the traditional preprocessor-based #include model with a cleaner, more structured system. Instead of copy-pasting code into translation units, modules compile once, then import—reducing repeated parsing and giving compilers more context to optimize builds.

How C++ Modules Work

A module interface is a standalone file—usually with the .ixx extension—that declares what's available to other parts of your program. You can then import this module in other files using the import keyword (much like it works in Python), bypassing the need for header files entirely.
Behind the scenes, the compiler builds and caches the module interface, so future builds can skip reprocessing its contents—saving time and keeping things tidy.
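As a minimal sketch (file and symbol names here are illustrative), a module interface and a consumer might look like this:

// math_utils.ixx -- a module interface unit
export module math_utils;

export int add(int a, int b)
{
    return a + b;   // compiled once, then imported wherever it's needed
}

// main.cpp -- a consumer that imports the module instead of including a header
#include <cstdio>
import math_utils;

int main()
{
    std::printf("%d\n", add(2, 3));   // prints 5
}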

Timeline at a Glance

  • C++20, officially published in December 2020, introduced official module support, though early compiler implementations were partial.
  • C++23, finalized in early 2023, expanded the spec, offering better support for features like module partitions and header unit compatibility.
  • Toolchains like Clang, MSVC, and GCC continue to evolve their support—but as of 2025, full interoperability is still a work in progress.

C++ module adoption timeline

Arguments for Adopting C++ Modules

If you’ve ever watched a massive C++ project crawl through compilation—or spent hours untangling a web of includes and macros—then the case for modules probably sounds pretty appealing. Here’s where they shine.

Improved Build Times and Scalability

Traditional C++ compiles every translation unit independently, parsing the same headers repeatedly across your codebase. That’s a lot of duplicated effort.
With modules, compilers can parse once and cache the results (just like how Visual Assist does it!). Module interfaces are precompiled and reused, cutting down redundant parsing.
On large projects, this can lead to significant reductions in full build and incremental compile times, especially when combined with modern build systems that understand modules.
This isn’t just theoretical—early adopters have seen real gains when porting to modules, particularly in libraries with thousands of files and deep dependency chains.

Cleaner Dependencies

Modules bring much-needed structure to C++. They reduce reliance on preprocessor directives and eliminate include guards, forward declarations, and subtle header-only bugs. In fact, they encourage you to think more clearly about what should be exposed and what should stay private.
Since you explicitly export only what’s needed, modules help enforce encapsulation, making APIs easier to maintain and less prone to unexpected breakage.

Improved IDE and Tooling Support

While not all editors are fully up to speed yet, modern IDEs and compilers are catching up. Visual Studio, Clang-based tools, and even some lightweight editors are beginning to provide meaningful module-aware features—like faster IntelliSense, smarter indexing, and fewer false-positive diagnostics.
Once your toolchain supports modules well, you’ll notice a smoother developer experience, particularly when working in large codebases.

Modernization and Future-Proofing

Adopting modules isn't just about shaving off build minutes—it's about aligning with the future direction of the language. As more of modern C++ leans into modules (like the standard library modules std and std.compat introduced in C++23), developers who adopt early will be better positioned to take advantage of new capabilities.
Modules are also a gateway to cleaner build systems, more granular dependency management, and even more secure code, thanks to their ability to restrict symbol visibility and reduce accidental API exposure.

Industry Trends and Early Adoption

While modules haven’t reached critical mass yet, they are gaining traction. Library developers and performance-focused teams are leading the way, especially those building SDKs, game engines, or systems software where build time is a bottleneck.
We’ve also seen big names like Microsoft experiment with module adoption in parts of their standard library implementation, and some open-source projects have already migrated small parts of their code to test the waters.

Why you may want to delay adopting C++ Modules (for now)

For all the promise that C++ modules bring, real-world adoption is still, well… cautious. Developers aren’t exactly lining up to refactor their entire codebase just yet — and there are good reasons why.

Not much incentive to adopt

Even in greenfield projects, introducing modules comes with a learning curve. But in legacy codebases? Migration can be daunting. You’ll need to rethink your header structure, untangle tight coupling, and manage new build system dependencies — not to mention retraining your team. And then there’s the question of compatibility: modules don’t play nicely with everything, particularly if you rely heavily on macros, conditional compilation, or platform-specific headers.


In other words, this isn't a weekend refactor — and for many teams the payoff doesn't yet outweigh the cost, so it makes more sense to use modules on new projects instead.

Tooling Inconsistencies and Fragmentation

Ask any developer who's attempted to go modular: "Which compiler are you using?" matters more than it should. While support for modules exists in Clang, MSVC, and GCC, it's not uniform — and version-specific quirks can introduce frustrating inconsistencies.


Build system support is also in flux. While CMake has added module support, it still feels experimental, especially for complex project setups or cross-platform builds. Other systems like Bazel or custom build pipelines require more glue code than most teams want to maintain.
In short: the tooling isn’t fully there yet — especially if you’re not using the absolute latest compiler versions.

Lack of Ecosystem Maturity

Even if your toolchain is up to date, the broader ecosystem might not be. Most third-party libraries aren’t shipping with module interface units, which means you’re either stuck writing your own wrappers or falling back to #include anyway. That limits the benefits of going modular in mixed environments — which, let’s face it, is most environments. Until popular libraries (Boost, Qt, etc.) begin offering reliable module support, most teams can’t go all-in without making sacrifices.

Limited Real-World Case Studies

There’s still a lack of detailed success stories when it comes to large-scale adoption. Some early adopters have shared benchmarks or migration notes, but most real-world examples are small experiments, not full production shifts.


Without broader case studies to learn from, many teams are taking a “wait and see” approach — watching how others fare before diving in themselves.

Stability Concerns

The C++ modules ecosystem is still evolving. Compiler behavior can change between minor versions, module-related bugs pop up in tooling updates, and build system support continues to shift.


This kind of churn makes it hard to commit to modules in production, especially in enterprise environments where stability is everything.

Situations Where Modules Might (or Might Not) Be Worth It

C++ modules aren’t an all-or-nothing deal — and thankfully, you don’t have to rip out every #include to start using them. Depending on your project, team size, and tooling setup, modules might either be a smart optimization or an unnecessary complexity. Let’s break it down.

 When Modules Make Sense

  • You’re starting a new codebase (especially at scale)
    Greenfield projects are the perfect playground for modern C++. If you’re building a large system from scratch, modules let you start clean — without legacy header baggage. Organizing your code as modular interfaces from the beginning can make maintenance, scalability, and onboarding much easier.
  • You maintain a modern toolchain
    If your team is already using the latest versions of GCC, Clang, or MSVC — and you’re comfortable updating your toolchain regularly — you’re in a better position to benefit from the improved compile times and structure that modules offer.
  • You’re building reusable libraries
    Modules are a natural fit for API design. If you’re developing shared components, SDKs, or internal packages, defining module interfaces can help enforce encapsulation and create cleaner, more predictable dependencies.
  • You have a strong DevOps/infrastructure team
    Getting modules to play nicely with CMake or your CI pipeline isn’t always straightforward. Teams with dedicated infrastructure support can manage the learning curve more effectively and are better equipped to deal with compiler quirks or build system tweaks.

When You Might Want to Hold Off

  • You’re working with a legacy codebase
    Old code doesn’t like change. Migrating headers, untangling circular dependencies, and retrofitting module maps can eat up time with little visible payoff — especially if you’re also juggling deadlines.
  • Your build system isn’t ready
    If your project relies on complex or deeply customized builds, introducing modules can introduce instability rather than speed. Even popular tools like CMake are still maturing their module support, and not all workflows are smooth yet.
  • You rely heavily on third-party libraries
    Until widely used libraries start shipping module interface units, your modules will live in an awkward coexistence with #include. This kind of hybrid environment can be frustrating and lead to confusing bugs or duplicated efforts.
  • Your team is small or early-stage
    If you’re moving fast and shipping often, taking time to restructure code for modules might not be worth the effort right now. Simplicity usually wins in the early days — and headers still work just fine.

Community Perspectives and Industry Signals

While C++ modules continue to mature, much of their momentum—and hesitation—comes from the wider community: compiler vendors, standards committees, open-source maintainers, and developers who've dipped their toes in and reported back. Let's explore what the broader C++ ecosystem is saying about modules in 2025.

Summary: Key Considerations Before Making a Choice

As we wrap up, let’s briefly recap the main points and outline what you should consider before diving into C++ modules:

Pros of Adopting C++ Modules

  • Improved build times: If you’re working with large codebases, the performance gains from reduced redundant parsing can be significant.
  • Cleaner dependencies: Modules eliminate many of the headaches associated with header file inclusion, such as tangled macros and circular dependencies.
  • Tooling support: While still evolving, most major compilers (MSVC, Clang, GCC) are heading in the right direction, and IDE support is growing.

Cons of Adopting C++ Modules

  • Fragmented tooling: Support across compilers and build systems is still inconsistent. If you’re using a particular toolchain, check for full compatibility before diving in.
  • Migration cost: Moving an existing project to modules involves significant changes in build systems, dependencies, and possibly code itself.
  • Lack of third-party support: If your project relies heavily on external libraries, check whether they support modules, or be prepared for some custom workarounds.
  • Limited case studies: The adoption rate of modules, especially in large-scale real-world projects, is still low, meaning the learning curve could be steeper than expected.

When Should You Adopt C++ Modules?

  • New codebases or projects: If you’re starting fresh or adding new features to a project, adopting modules early could save you time in the long run.
  • Open-source libraries: If you’re maintaining a widely-used library, moving to modules could lead to performance improvements that benefit the community.
  • Legacy codebases: If you’re dealing with a large, established project, the effort to migrate to modules may not be justified unless you have the resources to support it.

Ultimately, adopting C++ modules in 2025 depends on your project’s size, complexity, and long-term goals. It may be worth experimenting with modules on smaller, isolated parts of your project to gauge their potential before committing to a full-scale migration.

Add more support for modules in C++

If you're on the fence about using C++ modules because of the relatively limited tooling available for them, consider adding the Visual Assist plugin for Visual Studio. In a recent update, it added recognition for module declarations in your project. This added support makes C++ modules easier to work with, with the navigation and auto-suggest features working as you'd expect.

The post C++ Modules: What it promises and reasons to remain skeptical first appeared on Tomato Soup.

]]>
https://www.wholetomato.com/blog/c-modules-what-it-promises-and-reasons-to-remain-skeptical/feed/ 0 4158
Visual Assist 2025.1 release post https://www.wholetomato.com/blog/visual-assist-2025-1-release-post/ https://www.wholetomato.com/blog/visual-assist-2025-1-release-post/#respond Mon, 31 Mar 2025 15:52:53 +0000 https://www.wholetomato.com/blog/?p=4133 VA 2025.1 enhances usability with smarter navigation, better C++ module support, and more flexible refactoring options. The updated first-run dialog, configurable test snippets, and a refreshed UI improve the overall experience. Additionally, several key fixes...

The post Visual Assist 2025.1 release post first appeared on Tomato Soup.

]]>
VA 2025.1 enhances usability with smarter navigation, better C++ module support, and more flexible refactoring options. The updated first-run dialog, configurable test snippets, and a refreshed UI improve the overall experience. Additionally, several key fixes address navigation issues, assignment suggestions, and UI inconsistencies, ensuring a more stable and efficient development environment.

Download the release now from our website download page.

VA Integration modes: Updated First Run Dialog

In VA 2024.9, new integration modes were added to allow users to personalize their experience with how Visual Assist features were presented and accessed. You can visit the integration mode page to learn more about available integration modes. This dialog was initially shown for fresh installs only. 

VA 2025.1 makes the dialog appear for every user who has not previously encountered it, regardless of whether they are installing Visual Assist for the first time or have updated from an earlier version.

The first run dialog allows users to pick VA integration modes.

Option to exclude symbols in GoTo and List Methods navigation

This small tweak adds an option to skip selecting a symbol after you navigate to it. That way, you can immediately start typing before the symbol, or keep your current selection even after jumping to a different part of the code.

This currently works for VA’s Go To and List Methods in Current File (Alt + M). Access the new option via the toolbar.

Open the options dialog to select symbol selection behavior.

Specify access level on Extract Method

VA introduces a new option that allows developers to specify the access level (public, private, or protected) directly when using the Extract Method refactoring tool.

Specify the visibility of methods obtained via Extract Methods using the new options.

This streamlines the refactoring process by providing an immediate choice of access level for the new method being created from the selected block of code. Previously, after extracting a method, the default access level was applied (usually private), and any changes to this required manual adjustment. 

With this update, developers can set the desired access level in the initial step of the extraction, ensuring better code organization and encapsulation from the outset.
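As a rough before-and-after illustration (the UserService class and IsValidEmail helper are hypothetical examples, not VA output), extracting an inline validation check into a method with a chosen access level might end up looking like this:

#include <string>

class UserService
{
public:
    bool RegisterUser(const std::string& email)
    {
        // Before the refactor, this validation lived inline right here.
        if (!IsValidEmail(email))
            return false;
        // ...continue with the rest of registration...
        return true;
    }

protected:   // access level chosen directly in the Extract Method options
    bool IsValidEmail(const std::string& email) const
    {
        return !email.empty() && email.find('@') != std::string::npos;
    }
};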

New features added for C++ modules when importing

When you declare new modules in your project, VA recognizes what you are trying to do, and core navigation and other features work accordingly. This includes autocompletion prompts, adding includes, finding references, and other pertinent navigations.

C++ modules were added in C++20 to help improve compilation times and the overall build performance of C++ programs. Modules provide a modern alternative to traditional header files and includes by allowing programmers to define interfaces that are compiled separately and imported as needed.

This reduces the need to include headers and recompile code unnecessarily, which can significantly speed up the build process. 

Modules in C++ are fairly new and the committee is still pushing for mass adoption. But whether you’re an early adopter of C++ modules or not, this VA update should help you find available modules should the need arise.

VA now parses C++ modules, enabling core navigations and features.

Support for *.IXX module files.

This change allows VA to parse and understand the new modular structure introduced with C++20. This means that developers can now work with module interface files (.ixx) directly within the Visual Assist environment, benefiting from features like syntax highlighting, code navigation, and intelligent code completion that were previously limited to traditional header and source files.

For instance, if you had symbols declared in an .IXX file, VA now properly parses them and navigation features such as Go To will now work properly.

Configurable snippet base for unit test generation

There are new configuration options available for Unit Test Generation that allow developers to customize the boilerplate code that is automatically generated when creating unit tests. 

The unit test generation feature was first introduced in VA 2024.9, adding the ability to create boilerplate that follows the Google Test framework. It creates a new test file, prepopulated with placeholders following the framework's test structure, to make things more convenient for users.

VA 2025.1 upgrades this new feature with the flexibility of specifying preferences and settings that align with their project’s requirements or personal coding standards.
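For reference, a typical Google Test skeleton of the kind this feature generates looks roughly like the following; the function under test, suite, and test names are placeholders rather than VA's exact output.

#include <gtest/gtest.h>

// Function under test -- a stand-in for your own code.
int Add(int a, int b) { return a + b; }

TEST(AddTest, HandlesPositiveNumbers)
{
    EXPECT_EQ(Add(2, 3), 5);
}

TEST(AddTest, HandlesNegativeNumbers)
{
    EXPECT_EQ(Add(-2, -3), -5);
}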

New modernized tomato icon 

Our loveable tomato icon has been given a fresher look for the new development year! This was done primarily to improve user experience and accessibility: the change increases contrast and makes VA's features more distinguishable, so users can find them more easily in the IDE.

new whole tomato visual assist logo 2025

Updated tomato icon. Will be rolled out for every platform!

We’ve also taken the opportunity to maintain a consistent look and feel across all instances of our tomato icon. This update ensures that they appear correctly and uniformly across all platforms.

Excluding C# files from parsing via “settings.json” file.

VA 2025.1 builds upon similar functionality introduced in VA 2022.4, where configuration instructions outlined in a .json file can be used to skip unnecessary parsing when building solutions.

This new feature does something similar, but for C# instead. The feature allows developers to specify which C# files should be excluded from parsing by Visual Assist through a configuration in a .json file.

This is particularly useful for developers working cross-platform as this tells Visual Studio and Visual Assist to “open a file but do not parse anything else apart from a specific part.” 

So even if users have dozens of non Visual Studio files in one directory, you can specify which files are part of the project you are trying to open. (Otherwise, VS and VA will try to parse the whole directory—very resource intensive and time consuming.)

Bug Fixes

Most of the bug fixes and general improvements in this release were based on user feedback and reports. The most notable are a fix for a crash that could happen when logging is enabled while debugging, and a fix for a hang involving the Go To features. A pesky bug related to two-monitor setups has also been fixed.

The following list summarizes the most important bugs addressed in this release:

  • Fix for flashing in the Find References results window on start or when changing monitors.
  • Fix for Encapsulate field in C#.
  • Fix for VA Hashtags not being suggested.
  • Fix for assignment suggestions not appearing in some cases.
  • Fix for dialog hang that could sometimes happen when using Goto.
  • Increased the display limit for Move Method to Base Class to 12 base classes (from 6).
  • Fix for Move Method to Base Class sometimes not displaying the base class list to move to.
  • Fix for tip of the day links opening in Internet Explorer rather than the default browser.
  • Fix for a crash that could sometimes happen when troubleshoot logging is enabled.
  • Fix for attributes displaying in a difficult to read color when in dark mode.

Availability & Feedback

This release was made generally available on March 28th and can be downloaded via the downloads page. As always, we appreciate feedback, especially on recently introduced features and the UI changes we introduced. Thank you for helping us create a better experience for all our users.

Update now to an active version to utilize all the features and fixes in this release. And if you have any questions or encounter any issues, feel free to reach out to support@wholetomato.com.

The post Visual Assist 2025.1 release post first appeared on Tomato Soup.

]]>
https://www.wholetomato.com/blog/visual-assist-2025-1-release-post/feed/ 0 4133
Introduction to CUDA development + How to set up with Visual Studio https://www.wholetomato.com/blog/intro-to-cuda-and-visual-studio-installation/ https://www.wholetomato.com/blog/intro-to-cuda-and-visual-studio-installation/#respond Wed, 05 Feb 2025 15:35:51 +0000 https://www.wholetomato.com/blog/?p=4040 Introduction Think about this. Have you ever thought about two things at once? If you reflect a bit, our brains are super complex but they only focus on one train of thought. Sure, a lot...

The post Introduction to CUDA development + How to set up with Visual Studio first appeared on Tomato Soup.

]]>
Introduction

Think about this: have you ever truly thought about two things at once? Our brains are incredibly complex, but they only follow one train of thought at a time. Sure, a lot can happen subconsciously, but you can only be conscious of a single thing—you can't focus on two things simultaneously.

But what if you could? This opens up a wide array of possibilities. Imagine learning from multiple sources, or solving three math equations in your head simultaneously, or literally multitasking with each hand doing something different.

That's the idea behind how graphics processing units (GPUs) are being used to fast-track development for a few specialized technologies. With their capability to process significantly more threads than CPUs, they can execute tasks that require heavy parallel processing, such as rendering graphics, training machine learning models, and running complex simulations.

And one of the ways to program your GPUs to spit out data that isn’t just graphics is via a framework called CUDA. And that’s what we’re talking about in this blog today.

Why is CUDA being used now

CUDA, which stands for Compute Unified Device Architecture, speeds up computing tasks by using the power of graphics processing units (GPUs). It is a framework developed by NVIDIA in 2006. CUDA allows developers to write programs that divide large computing tasks into smaller ones using parallel computing. 

This uses the many cores of a GPU to perform multiple calculations simultaneously—unlike a CPU, which uses a few powerful cores optimized for sequential processing. This parallel processing capability significantly speeds up tasks that involve large datasets or complex computations, such as those found in graphics rendering, scientific simulations, and machine learning.

NVIDIA's CUDA has been around for nearly two decades, and thanks to its popularity and native compatibility with NVIDIA's own video cards, it has emerged as one of the leaders in the industry. And even though CUDA's chokehold on the space is loosening, it remains a top choice for accelerating the training of machine learning models.

Industries using CUDA 

We've talked about the advantages of using GPUs and how you can use CUDA to program them for specific tasks. The most popular use case right now is the rise of machine learning and AI, but we've listed several other industries, some you may not know about, that also take advantage of GPU computing power.

For each industry below, the parentheses list the task or work needed, followed by how CUDA-enabled programs help:

  • Data Science & AI (deep learning training, NLP, recommendation systems): speeds up training of AI models exponentially, helping with things like chatbots and recommendation algorithms.
  • High-Performance Computing (scientific simulations, physics calculations): speeds up complex science experiments and research.
  • Finance (risk modeling, high-frequency trading (HFT), portfolio optimization): computes complex financial calculations much faster, which helps traders make quick decisions.
  • Autonomous Vehicles (object detection, sensor fusion, path planning): helps self-driving cars "see" and react to their surroundings in real time.
  • Manufacturing & Industrial Automation (predictive maintenance, defect detection, robotic control): helps machines spot problems before they happen and improves automation.
  • Weather & Climate Science (climate modeling, hurricane prediction, data assimilation): runs weather simulations much faster to improve forecasts.
  • Cybersecurity (anomaly detection, encryption/decryption, threat analysis): helps detect hackers and secure data faster.
  • Robotics (real-time sensor processing, AI-based control, SLAM (Simultaneous Localization and Mapping)): helps robots process what they see and move more accurately.
  • Blockchain & Cryptography (cryptocurrency mining, transaction validation): makes mining cryptocurrencies and securing transactions faster.

Challenges in learning CUDA development

While GPU programming with CUDA is on the rise, there is still a significant barrier to becoming a skilled CUDA programmer. Its biggest strength is also what complicates learning it: CUDA is designed for parallel computing, which is fundamentally different from traditional serial programming. Programmers need to grasp concepts like threads, blocks, and grids, and how they map to GPU hardware.

In addition, C/C++, a lower-level language usually suited to intermediate developers, is arguably the language to learn if you want to get the most out of CUDA (you can also opt for Python using PyTorch or JAX).

Lastly, CUDA requires deeper knowledge of the physical hardware (i.e., which NVIDIA GPU(s) you're using). There is extra setup involved, both in hardware and in software toolkits, before you can do even basic development and testing. Achieving high performance also requires studying the GPU architecture, careful code optimization, and tight memory management.
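To give a feel for the thread/block/grid model described above, here is a minimal CUDA C++ sketch that adds two vectors in parallel. It assumes an NVIDIA GPU with the CUDA Toolkit installed and is meant as an orientation example, not a tuned implementation.

#include <cstdio>
#include <cuda_runtime.h>

__global__ void addVectors(const float* a, const float* b, float* c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // global thread index
    if (i < n) c[i] = a[i] + b[i];
}

int main()
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);   // unified memory keeps the example short
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    const int threadsPerBlock = 256;
    const int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;   // grid size
    addVectors<<<blocks, threadsPerBlock>>>(a, b, c, n);
    cudaDeviceSynchronize();

    std::printf("c[0] = %.1f\n", c[0]);   // expect 3.0

    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}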

Setting up your first CUDA programming project

A CUDA .cu file with proper syntax highlighting and code analysis features opened in Visual Studio.

Starting with your first ever CUDA project may seem daunting but with the right directions, you can easily configure Visual Studio for CUDA programming projects in just an hour. Follow these steps below to get started:

Installing Visual Studio

Visual Studio is a good first option for an IDE if you are already familiar with C++. It integrates with the NVIDIA CUDA Toolkit, which allows you to compile, debug, and optimize CUDA applications within the same platform.

  • Download Visual Studio

    First, download Visual Studio from Microsoft. Choose whatever edition you prefer. For our installation, we downloaded the Community edition of Visual Studio 2022, as it's the latest supported version for our Windows 11 system.
  • Run the installer to complete the installation

    Follow the succeeding prompts until you get to the Visual Studio installer. It will ask you for a couple of things, such as the install directory, and will check a few dependencies. Afterwards, you should be able to launch Visual Studio from this window or from a shortcut.

Installing the CUDA Toolkit

With Visual Studio now installed, you will need to download the CUDA Toolkit for Visual Studio. It provides the tools, libraries, and compiler (nvcc) needed to develop and run CUDA applications within Visual Studio. It enables integration for GPU-accelerated computing, allowing you to use NVIDIA GPUs for high-performance tasks.

  • Verify you have a CUDA-compatible GPU
    To ensure smooth operations, first check if your current GPU is a supported device. You can do this by navigating to the Display Adapters section in the Windows Device Manager. For more information, visit NVIDIA’s install guide. 
  • Download CUDA Toolkit from NVIDIA

    Visit NVIDIA’s website to download and learn more about the toolkit. Before downloading, ensure that you have chosen the correct OS, version, etc. The download file in our case is 3.2 GB but please ensure you have at least 10 GB of free space as you still need to temporarily extract the installation files before running the installer.

  • Run the installer

    After downloading, run the installer. It will scan your device for any missing dependencies or pre-existing installs and adjust your installation files accordingly. Afterwards, you will have the CUDA Toolkit installed on your system. Additionally, Nsight, which provides debugging and profiling features specifically for CUDA applications, will also be installed.

    If you encounter any issues with installing the toolkit, consult NVIDIA’s installation and troubleshooting guide.

    Bonus tip: If you prefer Visual Studio Code, you should install Nsight from this link instead. It's an application development environment for heterogeneous platforms that brings CUDA development for GPUs into Microsoft's Visual Studio Code.

Getting started with your first CUDA project in Visual Studio

After installing both Visual Studio and the CUDA toolkit, you are now ready to initialize your first project within Visual Studio.

  • Creating a new project.
    Start by opening Visual Studio and creating a new project, or clone an existing repository to start your first project file.
  • Initializing your project.

    At this point you have two options: either start with a completely blank console project or choose the CUDA 12.8 project. The main difference is that the CUDA Runtime template comes pre-equipped with the usual workloads, sample code, and use cases. However, starting from scratch allows you to configure your project with only what you need, and it also familiarizes you with the workspace. For this project, we'll start with a completely blank project.
  • Setting your build configuration

    At the top of the Visual Studio window, choose Release and x64 (if you're running a 64-bit system). This tells VS to build a deployable version of the app, as opposed to a debug build.
  • Adjusting build dependencies

    You need to ensure that Visual Studio knows you're trying to build and execute CUDA files. To configure this, right-click on your project name (“CUDA Sample”) and click Build Dependencies → Build Customizations. A new window will pop up that lists the available build customization files—be sure to tick CUDA 12.8 and hit OK.

  • Adding a CUDA C++ or Header file

    To add new source files, simply add new items as you would add any normal .cpp or header file. Right-click on a folder and click Add → New Item to access your file options.
  • Verifying file and project setup is correct
    At this point, we suggest trying to build the solution to ensure that everything is working smoothly. If nothing breaks, congratulations! You can now start working on your first CUDA file inside VS; a minimal test program like the one sketched below is enough to confirm the toolchain works. NVIDIA also provides a few sample projects so you can test, debug, and familiarize yourself with the setup using existing projects before creating a new one entirely.
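
If you want something concrete to build for this sanity check, a tiny end-to-end program along the lines of the sketch below (our own illustrative example, not one of NVIDIA's samples) can be pasted into a new .cu file:

// kernel.cu: a minimal end-to-end CUDA check (illustrative example).
#include <cstdio>
#include <cuda_runtime.h>

__global__ void addOne(int* data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1;
}

int main()
{
    const int n = 8;
    int host[n] = { 0, 1, 2, 3, 4, 5, 6, 7 };

    int* device = nullptr;
    cudaMalloc(&device, n * sizeof(int));                              // allocate GPU memory
    cudaMemcpy(device, host, n * sizeof(int), cudaMemcpyHostToDevice); // copy input to the GPU

    addOne<<<1, n>>>(device, n);                                       // launch one block of n threads
    cudaMemcpy(host, device, n * sizeof(int), cudaMemcpyDeviceToHost); // copy the results back
    cudaFree(device);

    for (int v : host) printf("%d ", v);   // expected output: 1 2 3 4 5 6 7 8
    printf("\n");
    return 0;
}

If it builds and prints the incremented values, the toolkit, compiler, and project configuration are all wired up correctly.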

Optimizing your setup

VS and NVIDIA have made giant strides in making CUDA development easier to access and set up. However, as CUDA is proprietary NVIDIA technology, there may still be some missing syntax highlighting or confused prompts from VS's IntelliSense.

To alleviate this, it is recommended to install supplementary plugins from the Visual Studio marketplace that can help with properly highlighting symbols. For example, you can download and install the Visual Assist plugin which adds support for CUDA-specific code that Visual Studio’s IntelliSense might not recognize yet. It also comes with the added benefit of providing its core features of navigation, refactoring, code assistance, and more, on top of the added support for .cu and .cuh files.

visual assist for C++ CUDA development

The Visual Assist plugin adds support for recognizing CUDA-specific code. VA recognizes you are using a symbol that references a missing header file and adds it for you.

Conclusion

While CUDA is a powerful tool, the landscape of parallel computing is dynamic, and its dominance will depend on technological advancements and shifts in industry needs. That said, given the rapid growth of AI and machine learning, CUDA is likely to remain relevant due to its optimization for deep learning tasks, especially as NVIDIA continues to innovate in this space.

In summary, if you’re looking to expand your software development skills into a growing space, then learning CUDA could be the move for you.

The post Introduction to CUDA development + How to set up with Visual Studio first appeared on Tomato Soup.

]]>
https://www.wholetomato.com/blog/intro-to-cuda-and-visual-studio-installation/feed/ 0 4040
Visual Assist 2024.9 release post https://www.wholetomato.com/blog/visual-assist-2024-9-release-post/ https://www.wholetomato.com/blog/visual-assist-2024-9-release-post/#respond Tue, 31 Dec 2024 08:19:33 +0000 https://www.wholetomato.com/blog/?p=4025 Happy Holidays! Visual Assist 2024.9 makes its way to general availability this holiday season!  This update introduces a key update to Find References and a new refactoring. We are also introducing a new way to...

The post Visual Assist 2024.9 release post first appeared on Tomato Soup.

]]>
Happy Holidays! Visual Assist 2024.9 makes its way to general availability this holiday season! 

This update introduces a key update to Find References and a new refactoring. We are also introducing a new way to experience Visual Assist—more on this below! And of course, thanks to your feedback, we also have bug fixes and general QoL improvements.

Visit our website page and download the release now.

Replace Find References Tree Control

Whenever you execute a Find References command, the results are shown in a dialog at the bottom of the window. In 2024.9, a portion of the results dialog and the logic behind it were overhauled to (1) make the UI display results faster and (2) add the ability to search and filter through those results.

Before this update, it could sometimes take half a second or so to display all the references and symbols as the UI tried to catch up with the greatly improved Find References speed. Now, when you are working with large projects or code bases, there will be minimal lag even as the parser incrementally adds hits to the results dialog.

Additionally, as a result of the overhaul, there is a new feature that allows users to actively search through the found results, even while the primary search is still ongoing. You don’t have to wait for the search to complete in order to interact with the results.

Move Method to Base Class (New refactoring)

A new refactoring has been added: the Move Method to Base Class is a powerful tool for improving the design and maintainability of your code. This feature allows you to take a method that was originally implemented in a child (or derived) class and move it to a base (or parent) class.

This transfers the method implementation to the base class and updates the derived classes to remove the redundant implementation. This makes derived classes smaller and more manageable—and thus more maintainable, more readable, and overall cleaner code.
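
As an illustration (a simplified example of our own, not output captured from Visual Assist), the refactoring performs a transformation along these lines:

// Before: PrintArea() is implemented in the derived class only.
class Shape {
public:
    virtual ~Shape() = default;
};

class Circle : public Shape {
public:
    void PrintArea() const { /* ... */ }   // candidate for promotion
};

// After "Move Method to Base Class": the implementation moves up,
// and the derived class no longer carries a redundant copy.
class ShapeAfter {
public:
    virtual ~ShapeAfter() = default;
    void PrintArea() const { /* ... */ }
};

class CircleAfter : public ShapeAfter {
    // PrintArea() removed; it is now inherited from the base class.
};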

New Visual Assist Integration Modes 

This release introduces two new available integration modes for the Visual Assist plugin. The available integration modes allow users to personalize their experience with Visual Assist. The two available modes are partial integration and full integration mode.

Partial integration sets fewer features on by default and will not change default key mappings—a more vanilla Visual Studio experience. This may be useful for those using Visual Assist for a few key features, or for those who are accustomed to the default VA experience.

Full integration is the recommended setting, as it embodies the experience VA was designed for. It exposes all code completion, code navigation, and autosuggestion features, and it also surfaces some of our less apparent features.

One of the main purposes of this mode is to make it easier to find and get familiar with the features inside Visual Assist. This applies even to beginners, as they can see and use more of the available features and functions.

Additionally, it’s the more flexible option, as it is easier to disable a few things manually while keeping everything else. As such, you can consider full integration the setting that maximizes all the benefits Visual Assist has to offer, and partial integration the classic version that keeps development a bit more zen, with fewer buttons and shortcuts to learn.

New “Ray” style row indicator

Visual Assist’s way of highlighting the currently selected line/row now has a new option that gives the current line a more distinct, thinner-edged appearance. To be precise, we added a new style, unique to the current iteration of Visual Studio, that draws a “ray”: a top and bottom line running across the editor.

Unit Test Code Generation feature

For those following the Google Test framework, you can use this new feature to create boilerplates to skip the tedium of setting up the test framework and verifying your test’s structure. With just a few clicks, you can create a new test file, pre-filled with test structure and essential placeholders, saving you significant time and effort.

To use this feature, just activate the feature on a class, and VA will create a new file with the foundation you need to start writing tests.

Availability & Feedback

This release is available starting December 30 and can be downloaded via the Whole Tomato downloads page. As always, we encourage your feedback, especially on recently introduced features, to help us make a better experience for you.

Thank you for your continued support, happy holidays and happy coding! If you have any questions or encounter any issues, feel free to reach out to our support team.

Download the release now.

The post Visual Assist 2024.9 release post first appeared on Tomato Soup.

]]>
https://www.wholetomato.com/blog/visual-assist-2024-9-release-post/feed/ 0 4025
Test Driven-Development and UI/UX Design: A Practical Guide [Webinar Recap] https://www.wholetomato.com/blog/tdd-unit-testing-ui-guide/ https://www.wholetomato.com/blog/tdd-unit-testing-ui-guide/#respond Sun, 22 Dec 2024 14:12:54 +0000 https://www.wholetomato.com/blog/?p=4029 Don’t you wish your code came with an undo button for every mistake? So do all developers who accidentally pushed a bug into production! But we got the next best thing: Unit testing. This webinar...

The post Test Driven-Development and UI/UX Design: A Practical Guide [Webinar Recap] first appeared on Tomato Soup.

]]>
Don’t you wish your code came with an undo button for every mistake? So do all developers who accidentally pushed a bug into production!

But we got the next best thing: Unit testing. This webinar will show you how to stop breaking your codebase (and your spirit) by writing tests that catch errors before they escape into the wild. Perfect for developers who know they should test but don’t know how—or why.

What You’ll Learn:

  • The differences between two schools of TDD and when to use them.
  • How to implement CI pipelines and automate your test execution.
  • Practical techniques for leveraging static analysis tools and code profiling.
  • Real-world case studies that highlight successful approaches to refactoring and performance optimization.

In this webinar, our experts shared their best practices for developing high-quality C++ code, offering valuable insights to apply in your projects.

This webinar features insights from experts in software design and development, covering practical applications and real-world scenarios to help you streamline your workflows.

This webinar has concluded. Scroll down to watch the replay and review the highlights.

Webinar Replay

Webinar Highlights

Introduction

0:19-1:35: About Nuno: product manager for Visual Assist, clean code enthusiast, contact info shared, alongside mission of Visual Assist and upcoming new version announcement.

Message and Story

1:40-5:12: Importance of programmers writing good quality software and Nuno’s experience with different software development approaches (design thinking, waterfall, agile).

Test-Driven Development Overview

5:12-8:10: Discovery of test-driven development (TDD) and its impact on software quality. Explanation of TDD and the Red-Green-Refactor cycle. Importance of small increments, immediate feedback, and other TDD benefits.

Practical Exercise Setup

8:17-10:09: Overview of the Mars Rover exercise, rules, and references.
10:09-11:00: Visual Studio 2022 setup for the Mars Rover project (source files and test project creation).

First Test Case

11:00-12:08: Writing the first test: Initial position at (0, 0), facing north.
12:08-13:11: Creating the Rover class and implementing execute() to return an empty string initially.
13:11-16:16: Making the test pass by returning the expected position and direction.
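
For readers following along, the first red-green step described above might look roughly like this in Google Test (class and method names are illustrative; watch the replay for the exact code used in the session):

#include <gtest/gtest.h>
#include <string>

// Illustrative Rover sketch: starts at (0, 0) facing north ("N").
class Rover {
public:
    std::string execute(const std::string& /*commands*/) {
        return "0:0:N";   // the simplest thing that makes the first test pass
    }
};

TEST(MarsRover, StartsAtOriginFacingNorth) {
    Rover rover;
    EXPECT_EQ("0:0:N", rover.execute(""));
}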

Second Test Case

16:16-18:15: Writing the second test: Rotating right from north to east.
18:15-20:09: Updating Rover to handle the “right rotation” command and making the test pass.

Refactoring and Patterns

20:09-20:59: Recognizing patterns in the test code and introducing Google Test fixtures for code reuse.
50:06-52:11: Introducing and implementing a current position variable. Writing and running tests to confirm functionality after the changes.
52:11-53:28: Extending functionality to the left method and replicating the test-driven approach used for the right method.
54:00-55:18: Cleaning up and optimizing the code after successful test results, ensuring all tests remain green.
56:00-56:48: Summary of the refactoring process and demonstration of the final Rover and Direction class setup.

QnA

[56:48–59:02]
Introduction to the Q&A session with Nuno Castro and Ian Barker. The discussion opens with strategies for writing tests for projects without existing tests. Suggestions include starting with end-to-end tests and gradually adding component-specific tests during future changes.

GUI Tools, A/B Testing, and Metrics

[59:02–1:03:07]
Overview of GUI testing tools like SmartBear’s TestComplete and their use in desktop and web testing. The discussion transitions into A/B testing, explaining its purpose and real-world examples (e.g., Coca-Cola product testing). The importance of metrics to gauge feature usage before redesign or development is also highlighted.

Agile Methodologies and Encouragement for TDD

[1:03:07–1:06:50]
Reflection on Agile methodologies, balancing speed with system stability, and evolving approaches such as Facebook’s shift from “move fast and break things” to prioritizing reliability. The session concludes with encouragement to adopt Test-Driven Development (TDD) and a nod to the value of unedited coding demos to showcase realistic problem-solving.

Self-Development, Testing, and TDD Approaches

[1:10:01–1:13:36]
Introduction to self-development as both a science and an art. Discussion includes testing strategies to ensure business logic isn’t broken, addressing overfitting in tests, and balancing test coverage with real-world solutions. User stories are highlighted as a foundation for design, followed by a comparison of the Chicago and London schools of TDD.

Design, User Experience, and Business Logic

[1:13:36–1:17:01]
Emphasis on designing user interfaces first and iterating on user experience challenges. The discussion incorporates Don Norman’s insight that user errors often indicate interface design issues. It concludes with balancing business logic with test coverage in TDD.

Closing

[1:17:01–1:18:00]
The importance of prioritizing timely application releases over perfectionism is discussed. The webinar ends with closing remarks, thanks to participants, replay information, and a final farewell.

The post Test Driven-Development and UI/UX Design: A Practical Guide [Webinar Recap] first appeared on Tomato Soup.

]]>
https://www.wholetomato.com/blog/tdd-unit-testing-ui-guide/feed/ 0 4029
How to Query File Attributes 50x faster on Windows https://www.wholetomato.com/blog/how-to-query-file-attributes-50x-faster-on-windows/ https://www.wholetomato.com/blog/how-to-query-file-attributes-50x-faster-on-windows/#respond Thu, 14 Nov 2024 15:52:55 +0000 https://www.wholetomato.com/blog/?p=4010 Imagine you’re developing a tool that needs to scan for file changes across thousands of project files. Retrieving file attributes efficiently becomes critical for such scenarios. In this article, I’ll demonstrate a technique to get...

The post How to Query File Attributes 50x faster on Windows first appeared on Tomato Soup.

]]>
Imagine you’re developing a tool that needs to scan for file changes across thousands of project files. Retrieving file attributes efficiently becomes critical for such scenarios. In this article, I’ll demonstrate a technique to get file attributes that can achieve a surprising speedup of over 50+ times compared to standard Windows methods.

Let’s dive in and explore how we can achieve this.

This is a blog post made in collaboration with Bartlomiej Filipek from C++ stories. You can visit his blog here.

The inspiration

The inspiration for this article came from a recent update for Visual Assist – a tool that heavily improves Visual Studio experience and productivity for C# and C++ developers.

In one of their blog posts, they shared:

The initial parse is 10..15x faster!

What’s New in Visual Assist 2024—Featuring lightning fast parser performance [Webinar] – Tomato Soup

After watching the webinar, I noticed some details about efficiently getting file attributes and I decided to give it a try on my machine. In other words I tried to recreate their results.

Disclaimer: Idera, the company behind Visual Assist, helped me write this post and sponsored it.

Understanding File Attribute Retrieval Methods on Windows

On Windows, there are at least a few options to check for a file change:

  • FindFirstFile[EX] – with Basic, Standard and LargeFetch options
  • GetFileAttributesEx
  • std::filesystem
  • GetFileInformationByHandleEx

Below, you can see some primary usage of each approach:

FindFirstFileEx

FindFirstFileEx is a Windows API function that allows for efficient searching of directories. It retrieves information about files that match a specified file name pattern. The function can be used with different information levels, such as FindExInfoBasic and FindExInfoStandard, to control the amount of file information fetched.

WIN32_FIND_DATA findFileData;
HANDLE hFind = FindFirstFileEx((directory + "\\*").c_str(), FindExInfoBasic, &findFileData, FindExSearchNameMatch, NULL, 0);

if (hFind != INVALID_HANDLE_VALUE) {
    do {
        // Process file information
    } while (FindNextFile(hFind, &findFileData) != 0);
    FindClose(hFind);
}

You can also pass FIND_FIRST_EX_LARGE_FETCH as the additional-flags argument to indicate that the function should use a larger buffer, which might bring some extra performance.

GetFileAttributesEx

GetFileAttributesEx is another Windows API function that retrieves file attributes for a specified file or directory. Unlike FindFirstFileEx, which is used for searching and listing files, GetFileAttributesEx is typically used for retrieving attributes of a single file or directory.

WIN32_FILE_ATTRIBUTE_DATA fileAttributeData;
if (GetFileAttributesEx((directory + "\\" + fileName).c_str(), GetFileExInfoStandard, &fileAttributeData)) {
    // Process file attributes
}

GetFileInformationByHandleEx

GetFileInformationByHandleEx is a low-level routine that might be tricky to use, but it gives us more control over the iteration. The main idea is to get a large buffer of data and read it on the application side, rather than rely on sometimes costly kernel/system calls.

Assuming you have a file open, which is a directory, you can iterate over its children in the following way:

while (true) {
    // Reset the cursor to the start of the buffer before each fill;
    // the previous iteration leaves pInfo pointing at the last entry read.
    pInfo = reinterpret_cast<FILE_FULL_DIR_INFO*>(buffer);

    if (!GetFileInformationByHandleEx(
        hDir,
        FileFullDirectoryInfo,
        pInfo,
        sizeof(buffer))) {
        DWORD error = GetLastError();
        if (error == ERROR_NO_MORE_FILES) {
            break;
        }
        else {
            std::wcerr << L"GetFileInformationByHandleEx failed (" << error << L")\n";
            break;
        }
    }

    do {
        if (!(pInfo->FileAttributes & FILE_ATTRIBUTE_DIRECTORY)) {
            FileInfo fileInfo;
            fileInfo.fileName = std::wstring(pInfo->FileName, pInfo->FileNameLength / sizeof(WCHAR));
            FILETIME ft{};
            ft.dwLowDateTime = pInfo->LastWriteTime.LowPart;
            ft.dwHighDateTime = pInfo->LastWriteTime.HighPart;
            fileInfo.lastWriteTime = ft;
            files.push_back(fileInfo);
        }
        pInfo = reinterpret_cast<FILE_FULL_DIR_INFO*>(
            reinterpret_cast<BYTE*>(pInfo) + pInfo->NextEntryOffset);
    } while (pInfo->NextEntryOffset != 0);
}

std::filesystem

Introduced in C++17, the std::filesystem library provides a modern and portable way to interact with the file system. It includes functions for file attribute retrieval, directory iteration, and other common file system operations.

for (const auto& entry : fs::directory_iterator(directory)) {
    if (entry.is_regular_file()) {
        // Process file attributes
        auto ftime = fs::last_write_time(entry);
        ...
    }
}

The Benchmark

To evaluate the performance of different file attribute retrieval methods, I developed a small benchmark. This application measures the time taken by each method to retrieve file attributes for N number of files in a specified directory.

Here’s a rough overview of the code:

The FileInfo struct stores the file name and last write time.

struct FileInfo {
    std::wstring fileName;
    std::variant<FILETIME, std::filesystem::file_time_type> lastWriteTime;
};

Each retrieval technique will have to go over a directory and build a vector of FileInfo objects.

BenchmarkFindFirstFileEx

void BenchmarkFindFirstFileEx(const std::string& directory,
                              std::vector<FileInfo>& files,
                              FINDEX_INFO_LEVELS infoLevel,
                              DWORD additionalFlags) // 0 or FIND_FIRST_EX_LARGE_FETCH
{
   WIN32_FIND_DATA findFileData;
   HANDLE hFind = FindFirstFileEx((directory + "\\*").c_str(),
                                   infoLevel,
                                   &findFileData,
                                   FindExSearchNameMatch, NULL,
                                   additionalFlags);

   if (hFind == INVALID_HANDLE_VALUE) {
       std::cerr << "FindFirstFileEx failed (" 
                 << GetLastError() << ")\n";
       return;
   }

   do {
       if (!(findFileData.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY)) {
           FileInfo fileInfo;
           fileInfo.fileName = findFileData.cFileName;
           fileInfo.lastWriteTime = findFileData.ftLastWriteTime;
           files.push_back(fileInfo);
       }
   } while (FindNextFile(hFind, &findFileData) != 0);

   FindClose(hFind);
}

BenchmarkGetFileAttributesEx

void BenchmarkGetFileAttributesEx(const std::string& directory,
                                  std::vector<FileInfo>& files) 
{
   WIN32_FIND_DATA findFileData;
   HANDLE hFind = FindFirstFile((directory + "\\*").c_str(),
                                &findFileData);

   if (hFind == INVALID_HANDLE_VALUE) {
       std::cerr << "FindFirstFile failed (" 
                 << GetLastError() << ")\n";
       return;
   }

   do {
       if (!(findFileData.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY)) {
           WIN32_FILE_ATTRIBUTE_DATA fileAttributeData;
           if (GetFileAttributesEx((directory + "\\" + findFileData.cFileName).c_str(), GetFileExInfoStandard, &fileAttributeData)) {
               FileInfo fileInfo;
               fileInfo.fileName = findFileData.cFileName;
               fileInfo.lastWriteTime = fileAttributeData.ftLastWriteTime;
               files.push_back(fileInfo);
           }
       }
   } while (FindNextFile(hFind, &findFileData) != 0);

   FindClose(hFind);
}

BenchmarkStdFilesystem

And the last one, the most portable technique:

void BenchmarkStdFilesystem(const std::string& directory,
                            std::vector<FileInfo>& files)
{
    for (const auto& entry : std::filesystem::directory_iterator(directory)) {
        if (entry.is_regular_file()) {
            FileInfo fileInfo;
            fileInfo.fileName = entry.path().filename().wstring();
            // Store the portable std::filesystem time type in the variant.
            fileInfo.lastWriteTime = entry.last_write_time();
            files.push_back(fileInfo);
        }
    }
}

BenchmarkGetFileInformationByHandleEx

void BenchmarkGetFileInformationByHandleEx(const std::wstring& directory, std::vector<FileInfo>& files) {
    HANDLE hDir = CreateFileW(
        directory.c_str(),
        GENERIC_READ,
        FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
        NULL,
        OPEN_EXISTING,
        FILE_FLAG_BACKUP_SEMANTICS,
        NULL
    );

    if (hDir == INVALID_HANDLE_VALUE) {
        std::wcerr << L"CreateFile failed (" << GetLastError() << L")\n";
        return;
    }

    constexpr DWORD BufferSize = 64 * 1024;
    uint8_t buffer[BufferSize];
    FILE_FULL_DIR_INFO* pInfo = reinterpret_cast<FILE_FULL_DIR_INFO*>(buffer);

    while (true) {
        // Reset the cursor to the start of the buffer before each fill;
        // the previous iteration leaves pInfo pointing at the last entry read.
        pInfo = reinterpret_cast<FILE_FULL_DIR_INFO*>(buffer);

        if (!GetFileInformationByHandleEx(
            hDir,
            FileFullDirectoryInfo,
            pInfo,
            sizeof(buffer))) {
            DWORD error = GetLastError();
            if (error == ERROR_NO_MORE_FILES) {
                break;
            }
            else {
                std::wcerr << L"GetFileInformationByHandleEx failed (" << error << L")\n";
                break;
            }
        }

        do {
            if (!(pInfo->FileAttributes & FILE_ATTRIBUTE_DIRECTORY)) {
                FileInfo fileInfo;
                fileInfo.fileName = std::wstring(pInfo->FileName, pInfo->FileNameLength / sizeof(WCHAR));
                FILETIME ft{};
                ft.dwLowDateTime = pInfo->LastWriteTime.LowPart;
                ft.dwHighDateTime = pInfo->LastWriteTime.HighPart;
                fileInfo.lastWriteTime = ft;
                files.push_back(fileInfo);
            }
            pInfo = reinterpret_cast<FILE_FULL_DIR_INFO*>(
                reinterpret_cast<BYTE*>(pInfo) + pInfo->NextEntryOffset);
        } while (pInfo->NextEntryOffset != 0);
    }

    CloseHandle(hDir);
}

The Main Function

The main function sets up the benchmarking environment, runs the benchmarks, and prints the results.

std::wstring directory = argv[1];
const auto arg2 = argc > 2 ? std::wstring_view(argv[2]) : std::wstring_view{};

std::vector<std::pair<std::wstring, std::function<void(std::vector<FileInfo>&)>>> benchmarks = {
    {L"FindFirstFileEx (Basic)", [&](std::vector<FileInfo>& files) {
        BenchmarkFindFirstFileEx(directory, files, FindExInfoBasic, 0);
    }},
    {L"FindFirstFileEx (Standard)", [&](std::vector<FileInfo>& files) {
        BenchmarkFindFirstFileEx(directory, files, FindExInfoStandard, 0);
    }},
    {L"FindFirstFileEx (Large Fetch)", [&](std::vector<FileInfo>& files) {	BenchmarkFindFirstFileEx(directory, files, FindExInfoStandard, FIND_FIRST_EX_LARGE_FETCH);
    }},
    {L"GetFileAttributesEx", [&](std::vector<FileInfo>& files) {
        BenchmarkGetFileAttributesEx(directory, files);
    }},
    {L"std::filesystem", [&](std::vector<FileInfo>& files) {
        BenchmarkStdFilesystem(directory, files);
        }},
    {L"GetFileInformationByHandleEx", [&](std::vector<FileInfo>& files) {
        BenchmarkGetFileInformationByHandleEx(directory, files);
    }}
};

std::vector<std::pair<std::wstring, double>> results;

for (const auto& benchmark : benchmarks) {
    std::vector<FileInfo> files;
    files.reserve(2000); // Reserve space outside the timing measurement

    auto start = std::chrono::high_resolution_clock::now();
    benchmark.second(files);
    auto end = std::chrono::high_resolution_clock::now();

    std::chrono::duration<double> elapsed = end - start;
    results.emplace_back(benchmark.first, elapsed.count());
}

PrintResultsTable(results);

Performance Results

To measure the performance of each file attribute retrieval method, I executed benchmarks on a directory containing 1000, 2000 or 5000 random text files. The tests were performed on a laptop equipped with an Intel i7 4720HQ CPU and an SSD. I measured the time taken by each method and compared the results to determine the fastest approach.

Each test run consisted of two executions: the first with uncached file attributes and the second likely benefiting from system-level caching.

The speedup factor is the factor of the current result compared to the slowest technique in a given run.

1000 files:

Method                         Time (seconds)       Speedup Factor
FindFirstFileEx (Basic)        0.0014831000         162.868
FindFirstFileEx (Standard)     0.0014817000         163.022
FindFirstFileEx (Large Fetch)  0.0011792000         204.842
GetFileAttributesEx            0.2415497000         1.000
std::filesystem                0.0609313000         3.964
GetFileInformationByHandleEx   0.0044168000         54.689

// second run:
Method                         Time (seconds)       Speedup Factor
FindFirstFileEx (Basic)        0.0013805000         44.947
FindFirstFileEx (Standard)     0.0011310000         54.863
FindFirstFileEx (Large Fetch)  0.0009071000         68.404
GetFileAttributesEx            0.0616772000         1.006
std::filesystem                0.0620496000         1.000
GetFileInformationByHandleEx   0.0025246000         24.578

Directory with 2000 files:

Method                         Time (seconds)       Speedup Factor
FindFirstFileEx (Basic)        0.0014455000         150.287
FindFirstFileEx (Standard)     0.0015029000         144.547
FindFirstFileEx (Large Fetch)  0.0012086000         179.745
GetFileAttributesEx            0.2172402000         1.000
std::filesystem                0.0609186000         3.566
GetFileInformationByHandleEx   0.0025069000         86.657

Method                         Time (seconds)       Speedup Factor
FindFirstFileEx (Basic)        0.0012020000         50.908
FindFirstFileEx (Standard)     0.0011614000         52.688
FindFirstFileEx (Large Fetch)  0.0008887000         68.856
GetFileAttributesEx            0.0611920000         1.000
std::filesystem                0.0611760000         1.000
GetFileInformationByHandleEx   0.0025835000         23.686

Directory with 5000 random, small text files:

Method                         Time (seconds)       Speedup Factor
FindFirstFileEx (Basic)        0.0077623000         84.975
FindFirstFileEx (Standard)     0.0828258000         7.964
FindFirstFileEx (Large Fetch)  0.0144611000         45.612
GetFileAttributesEx            0.6595977000         1.000
std::filesystem                0.3022779000         2.182
GetFileInformationByHandleEx   0.0051569000         127.906

Method                         Time (seconds)       Speedup Factor
FindFirstFileEx (Basic)        0.0069814000         43.844
FindFirstFileEx (Standard)     0.0148472000         20.616
FindFirstFileEx (Large Fetch)  0.0140663000         21.761
GetFileAttributesEx            0.3060932000         1.000
std::filesystem                0.3011346000         1.016
GetFileInformationByHandleEx   0.0051614000         59.304

The results consistently showed that enumerating with FindFirstFileEx was dramatically faster than querying GetFileAttributesEx per file, reaching speedups of up to roughly 200x in the uncached 1000-file run. Even in cached scenarios, the FindFirstFileEx variants still came in roughly 20x to 69x faster than GetFileAttributesEx. The FIND_FIRST_EX_LARGE_FETCH flag generally seems to increase performance further.

For the directory with 2000 files, FindFirstFileEx (Large Fetch) demonstrated a speedup factor of almost 180x in the first run, which dropped to about 69x in the second run. In the directory with 5000 files, GetFileInformationByHandleEx takes the crown, achieving roughly 128x in the uncached run and 59x in the cached run, while the other techniques topped out at around 85x and 44x respectively. Notably, std::filesystem performed on par with GetFileAttributesEx in the cached runs.

Further Techniques

Getting file attributes is only part of the story, and while important, they may contribute to only a small portion of the overall performance for the whole project. The Visual Assist team, who contributed to this article, improved their initial parse time performance by avoiding GetFileAttributes[Ex] using the same techniques as this article. But Visual Assist also improved performance through further techniques. My simple benchmark showed 50x speedups, but we cannot directly compare it with the final Visual Assist, as the tool does many more things with files.

The main item being optimised was the initial parse, where VA builds a symbol database when a project is opened for the first time. This involves parsing all code and all headers. They decided that it’s a reasonable assumption that headers won’t change while a project is being loaded, and so the file access is cached during the initial parse, avoiding the filesystem entirely. (Changes after a project has been parsed the first time are, of course, still caught.) The combination of switching to a much faster method for checking filetimes and then avoiding file IO completely contributed to the up-to-15-times-faster performance improvement they saw in version 2024.1 at the beginning of this year.
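
The exact implementation is internal to Visual Assist, but the general idea can be sketched as a one-shot cache that is filled while enumerating the directory and answers every later query from memory (an illustrative sketch of ours, not Visual Assist's actual code):

#include <string>
#include <unordered_map>
#include <windows.h>

// Illustrative sketch: cache last-write times during an initial scan so later
// checks never touch the filesystem.
class FileTimeCache {
public:
    // Enumerate the directory once and remember every file's last-write time.
    void build(const std::wstring& directory) {
        WIN32_FIND_DATAW fd;
        HANDLE h = FindFirstFileExW((directory + L"\\*").c_str(), FindExInfoBasic,
                                    &fd, FindExSearchNameMatch, nullptr,
                                    FIND_FIRST_EX_LARGE_FETCH);
        if (h == INVALID_HANDLE_VALUE) return;
        do {
            if (!(fd.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY))
                times_[fd.cFileName] = fd.ftLastWriteTime;
        } while (FindNextFileW(h, &fd));
        FindClose(h);
    }

    // Later queries are answered from memory, with no filesystem call at all.
    bool lookup(const std::wstring& fileName, FILETIME& out) const {
        auto it = times_.find(fileName);
        if (it == times_.end()) return false;
        out = it->second;
        return true;
    }

private:
    std::unordered_map<std::wstring, FILETIME> times_;
};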

Read further details on their blog Visual Assist 2024.1 release post – January 2024 and Catching up with VA: Our most recent performance updates – Tomato Soup.

Summary

In the text, we went through a benchmark that compares several techniques for fetching file attributes. In short, it’s best to gather attributes at the same time as you iterate through the directory – using FindFirstFileEx or via GetFileInformationByHandleEx. So if you want to do this operation hundreds of times, it’s best to measure time and choose the best technique. What’s more, if you expect to have lots of files in a directory it’s good to check techniques offering larger buffers.

The benchmark also showed one feature: while C++17 and its filesystem library offer a robust and standardized way to work with files and directories, it can be limited in terms of performance. In many cases, if you need super optimal performance, you need to open the hood and work with the specific operating system API.

Back to you

  • Do you use std::filesystem for tasks involving hundreds of files?
  • Do you know other techniques that offer greater performance when working with files?

Share your comments below. And if you’re using C++, you can also download and try Visual Assist yourself for 30 days for free.

The post How to Query File Attributes 50x faster on Windows first appeared on Tomato Soup.

]]>
https://www.wholetomato.com/blog/how-to-query-file-attributes-50x-faster-on-windows/feed/ 0 4010
C++ versus Blueprints: Which should I use for Unreal Engine game development? https://www.wholetomato.com/blog/c-versus-blueprints-which-should-i-use-for-unreal-engine-game-development/ https://www.wholetomato.com/blog/c-versus-blueprints-which-should-i-use-for-unreal-engine-game-development/#respond Wed, 23 Oct 2024 13:49:33 +0000 https://www.wholetomato.com/blog/?p=3983 Introduction When programming game elements in Unreal, developers have two main options: develop using Unreal’s visual blueprint system or develop using the C++ language.  The Blueprint system in Unreal Engine is a powerful visual scripting...

The post C++ versus Blueprints: Which should I use for Unreal Engine game development? first appeared on Tomato Soup.

]]>
Introduction

When programming game elements in Unreal, developers have two main options: develop using Unreal’s visual blueprint system or develop using the C++ language. 

The Blueprint system in Unreal Engine is a powerful visual scripting tool designed to help developers create gameplay mechanics without needing to write traditional code. Introduced in Unreal Engine 4 to make game development more accessible to non-programmers, Blueprints enable users to build systems by dragging and dropping pre-built nodes, representing code functions. Some developers treat blueprints as the be-all and end-all for programming in Unreal…

…but on the other hand, we have those who advocate C++ and its ability to program almost anything in Unreal. It has performance, versatility, and arguably makes you a better designer because you can control almost every mechanic of the game you are developing. 

In this blog post, we discuss the differences between the two approaches and hopefully it will help more people understand that it’s not an either/or decision and the most effective utilization is to use them to complement each other. 

Getting started: How to install Unreal Engine and Visual Studio

Introduction to Unreal’s Blueprint System

According to Epic, the creators of the Unreal Engine, the Blueprint Visual Scripting system is a “complete gameplay scripting system based on the concept of using a node-based interface to create gameplay elements from within Unreal Editor.”

Before Blueprints, Unreal Engine used a scripting language called UnrealScript (used in Unreal Engine 3 and earlier). While powerful, it required traditional programming knowledge and didn’t cater to artists or designers (which arguably comprise a greater bulk of game development) who needed to iterate rapidly without diving into code.

Fast forward to the highly acclaimed Unreal Engine 4, released in 2014: Epic introduced it alongside the Blueprint visual scripting system. The idea was to make game development more accessible to a wider range of creators, especially those who weren’t programmers. Blueprints allowed developers to visually connect logic, making scripting easier and more intuitive. It was essentially UnrealScript’s replacement, offering drag-and-drop functionality to build gameplay systems.

The latest updates in Unreal Engine 5 have taken blueprints one step further. Performance enhancements allow Blueprints to run more efficiently and closer to native C++ speeds, making them more suitable for complex projects. Furthermore, users now have the ability to nativize Blueprint code into C++, offering the best of both worlds by combining visual scripting ease with C++’s runtime performance.

Learn more: Unreal's Beginner's Guide to Blueprints

Quick explainer why C++ is used for Unreal Engine (and game dev)

The primary reason why C++ is used in Unreal development is the same reason why it’s used in game development in general—speed and performance. Additionally, as alluded to in the previous section, Unreal development is essentially programming that uses a lot of C++ macros that combine complex code into more easy-to-use bits.

Generally, the C++ language integrates nicely into the more minute processes you may want to program for Unreal. For instance, it shines when you are processing long arrays and loops that would otherwise be overwhelming to handle with Blueprints. You can also use C++ for making custom components and game mechanics that would otherwise be difficult in higher-level languages.

There are many more areas and disciplines we can talk about when it comes to C++, but the bottom line is that C++ gives you more control with memory. This consequently means more control over the systems that you can work with when developing your game.

Sample C++ code for an Unreal Engine game project. Syntax highlighting provided by Visual Assist plugin.

Comparing Blueprints and C++

When you are starting out in development in Unreal you will often find a clash of opinions on whether you should learn the blueprints system or dive into it with C++. Some people use C++ or blueprints exclusively—here are two summaries of these two views:

Why people may start with ONLY blueprints:

Blueprints are much easier to pick up. You don’t need to dive into complex code—everything’s visual. You’re basically dragging and connecting nodes to create mechanics, which means you can start building right away. 

There is no need to learn C++ before you can make something cool. If you’re new to Unreal Engine or game development in general, this is a huge plus because you can see results fast, without getting stuck on syntax or debugging.

And here’s the thing: Blueprints were introduced by Epic themselves. Like the other options available to you inside the engine, Blueprints are a super powerful system that can be used for most game mechanics.

Unreal Engine has optimized them to run smoothly, and unless you’re doing something really performance-heavy (like complex physics simulations), Blueprints will handle it just fine. You can even do advanced logic in Blueprints—things like AI, UI, and game state management—without needing to touch C++.

The other big advantage is speed—not computing speed, that’s C++’s zone. We’re talking about prototyping speed, especially in the early stages of development. Blueprints lets you iterate faster. You can make changes on the fly, test new ideas, and tweak mechanics without waiting for code to compile or worrying about errors. It’s especially helpful in small teams or solo projects where you need to move quickly and stay creative.

Also, Blueprints make it easier for non-programmers (like designers or artists) to collaborate. If you’re working with others, they can understand and adjust the game mechanics without needing to learn C++. 

Now, that’s not saying Blueprints are the only answer, but for most cases, especially if you’re starting out or need to quickly build and test, they’re perfect jumping boards. You can always add C++ later if you need more control or optimization. But for rapid development, ease of use, and accessibility, Blueprints are a great way to go.

So, why Blueprints? Easy to learn, fast to prototype, powerful for most tasks, and great for collaboration. You can always dive into C++ later, but for getting started and getting things done, Blueprints are more than enough!

Why people may start using ONLY C++:

C++ can sound intimidating compared to Blueprints, which lets you drag and drop things easily. But here’s why C++ is worth the challenge. Think of Blueprints like using LEGO blocks—you can build cool things, but you’re limited to prefabs. You can only build stuff with the pieces you have. What if you wanted to create a curved surface when there’s no curved block available?

In C++, you can make your own custom blocks. Curved, straight, jagged, irregular, all’s available for you to create yourself. You can control every detail of how your game works, especially when you want something that Unreal Engine doesn’t offer by default.

Now, performance. When your game gets complex, like with a huge world or fast-paced multiplayer, C++ runs circles around Blueprints. It’s just faster, talking directly to your computer’s hardware. Imagine you’re building an MMO—C++ will handle massive tasks way better than Blueprints. It’s the difference between a race car and a scooter.

And here’s a big one: the industry loves C++ developers. If you master it, you’re not just a game designer—you’re in high demand. Studios know C++ developers can dig deep into the engine, creating systems that Blueprints just can’t match in complexity or performance. Plus, the skills you learn in C++? They transfer to tons of other tech fields like finance, AI, or data analysis.

C++ is harder, but mastering it means you’ll be able to do anything in Unreal + others. You’re not just stuck building with what’s given—you’re creating from scratch. It’s more control, faster performance, deeper understanding, and wider career options. It’s harder, but trust me, once you learn it, you’ll be unstoppable. 

Summary:

Aspect | Blueprints | C++
Ease of use | Beginner friendly: easier to pick up. | Steeper learning curve.
Readability | Uses visual nodes signifying properties. Easy to understand, but gets complicated quickly as the number of nodes grows. | Uses C++ code bases and solutions. Requires more knowledge, but a few lines of code can be equivalent to a screen full of Blueprints.
Flexibility (use cases) | Limited by what is exposed in the Blueprint system; hard to implement highly custom systems. | Allows full access to everything under the hood. Access the entire engine with custom mechanics and optimizations.
Performance | Fast enough for most cases. Not advisable for complex or critical components. | High-performance; handles resource-intensive mechanics more efficiently.
Collaboration | Easy to understand (even for non-programmers). | Usually read and written by C++ programmers only.
Usage | Primarily used for rapid prototyping, simple logic, assets, scripts, and visual FX. | Primarily large, complex systems, performance-critical code, advanced customization, and low-level engine access.
Maintenance | Can become unwieldy in large-scale projects; hard to track and refactor visual logic. | Easier to maintain in large projects with proper coding practices; easier to refactor and debug.
Integration | Built into the Unreal ecosystem; works and compiles into C++. | Built into the Unreal ecosystem; works with Blueprints.

Now wait a minute… Focus on the last row on integration. Both C++ and the blueprint system are integrated into the Unreal development ecosystem and work with each other? So what should I focus on first? Continue to the next section to find out what our suggestion is on the most optimal way of developing in Unreal.

The Most Optimal Approach for C++ vs Blueprints – Our Suggestion:

Blueprints and C++ are not mutually exclusive. They are both ways to program mechanics, albeit at different levels. Use each according to the task at hand.

If you’re coming into this blog post as a bonafide beginner, (no experience with programming, no experience with Unreal) then the most likely best approach for you is to begin using Unreal’s blueprint system. You can expose yourself to the fundamentals of game development and see where you fit in. Are you going to be a game designer handling assets and world building primarily, or do you see yourself as someone who deals with designing the core mechanics of gameplay? 

Either way, it may be best for you to start with blueprints first as its beginner-friendly learning curve can help you answer these questions.

Now, if you have studied both approaches and have a basic understanding of Unreal development, and you’re looking for an answer to the question: What should I master first? Or which is better to use: BP or C++?

There is a false dichotomy between C++ and blueprints. C++ is a programming language, and Blueprints is a scripting system; you don’t have to use either exclusively. In fact, it’s actually better to use both simultaneously. C++ and Blueprints are integrated and allow easy interoperability. 

C++ is naturally better-suited for implementing low-level game systems, and Blueprints is naturally better-suited for defining high-level behaviors and interactions and for integrating aesthetic assets. But luckily for us, the game engine is designed so that you can jump back and forth between native C++ code and the scripting nodes.

The bottom line is that you can use both. Or you should use both so that you can get the benefit out of both systems.

The best way is to create custom C++ functions or classes. Then connect it all in blueprints.

Here is an example:

Say you need to implement a pathfinding mechanic for a small game board. It’s best to write the pathfinding algorithm logic in C++ where you have the benefit of increased logic density, clarity, easy and powerful debugging etc. then expose that to blueprints where you can call it.
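
For example, the C++ side might look like the sketch below (a minimal example assuming a standard Unreal C++ project; the class name, the module export macro MYGAME_API, and the placeholder path logic are all hypothetical). The UFUNCTION(BlueprintCallable) macro is what exposes the function as a node you can call from a Blueprint graph.

// PathfindingLibrary.h (hypothetical names for illustration)
#pragma once

#include "CoreMinimal.h"
#include "Kismet/BlueprintFunctionLibrary.h"
#include "PathfindingLibrary.generated.h"

UCLASS()
class MYGAME_API UPathfindingLibrary : public UBlueprintFunctionLibrary
{
    GENERATED_BODY()

public:
    // Heavy pathfinding logic lives in C++; Blueprints simply call this node.
    UFUNCTION(BlueprintCallable, Category = "Pathfinding")
    static TArray<FIntPoint> FindPath(FIntPoint Start, FIntPoint Goal, int32 BoardSize);
};

// PathfindingLibrary.cpp
#include "PathfindingLibrary.h"

// Placeholder logic: walk one axis, then the other. A real project would run
// A* or Dijkstra over the board here, which is where C++'s performance pays off.
TArray<FIntPoint> UPathfindingLibrary::FindPath(FIntPoint Start, FIntPoint Goal, int32 BoardSize)
{
    TArray<FIntPoint> Path;
    FIntPoint Current = Start;
    while (Current != Goal && Path.Num() < BoardSize * BoardSize)
    {
        if (Current.X != Goal.X)      Current.X += (Goal.X > Current.X) ? 1 : -1;
        else if (Current.Y != Goal.Y) Current.Y += (Goal.Y > Current.Y) ? 1 : -1;
        Path.Add(Current);
    }
    return Path;
}

Once the module compiles, FindPath shows up in the Blueprint editor as a callable node, so designers can wire the result into movement or UI logic without touching the C++ again.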

It’s worth noting that Blueprints weren’t created as an alternative to writing C++; rather, Blueprints were created to complement complex game systems built in C++ by making it very easy to do things like assigning property values in the editor as opposed to hard-coding them. So as you get more and more familiar with the engine, try creating systems in C++ that you can then extend in Blueprints for a very efficient workflow.

With this in mind, our suggestion is to use blueprints and get exposure to how the engine works, and when you’ve hit a wall of complexity that isn’t feasible with blueprints, you can extract the complex logic to C++ and use blueprint nodes to wrap that logic. 

Visual Assist’s own lead developer, Chris Gardner, shows how you can use C++ to create your own powerup in Unreal’s sample shooter game.

By adopting this hybrid workflow, you leverage the best of both worlds: the power and performance of C++ and the user-friendly nature of Blueprints for rapid iteration and testing. As you evolve in your development skills, this combination will enable you to create more complex and engaging gameplay experiences with greater ease.

Developer Protip: Make C++ Development Even More Simple

A lot of the difficulties in C++ come with learning its syntax and how it connects with what you see in the Unreal Editor. C++ can seem intimidating because of the level of abstraction needed. Developers, especially beginners, need all the support they can get.

Choosing your integrated development environment (IDE) is a fundamental decision when you decide to start learning C++ for Unreal. It contains the basic tools required to write and test your game software. And additionally, it provides nifty support and helpful prompts that can guide you.

If you’re coding using Visual Studio (one of the IDEs recommended by Epic themselves), here’s a must-have plugin for Unreal Engine development: Visual Assist. It is a plugin that was made to help Unreal developers working inside Visual Studio. It helps you navigate huge projects. It replaces some IDE features such as  find references with better alternatives. And it even helps your IDE understand Unreal-specific syntax, giving you essential highlighting and context-aware prompts.

Make Visual Studio work better with Unreal development by using Visual Assist.

Conclusion:

In conclusion, navigating the world of game development with Unreal Engine involves understanding the complementary strengths of C++ and Blueprints. While Blueprints offer a user-friendly and visually intuitive approach, allowing developers to quickly prototype and implement gameplay mechanics, C++ provides the performance, control, and depth necessary for serious projects. By recognizing that these two approaches are not mutually exclusive but complementary, developers can create more efficient game systems.

By leveraging the unique benefits of both C++ and Blueprints, you position yourself to create more engaging and polished gameplay experiences. Ultimately, whether you’re a newcomer eager to start building or an experienced developer looking to refine your skills, understanding how to effectively combine these tools will be invaluable in your quest to master Unreal Engine. Hence, it is not a question of C++ or Blueprints, but a statement: C++ AND Blueprints.

The post C++ versus Blueprints: Which should I use for Unreal Engine game development? first appeared on Tomato Soup.

]]>
https://www.wholetomato.com/blog/c-versus-blueprints-which-should-i-use-for-unreal-engine-game-development/feed/ 0 3983
Visual Assist 2024.7 release post https://www.wholetomato.com/blog/visual-assist-2024-7-release-post/ https://www.wholetomato.com/blog/visual-assist-2024-7-release-post/#respond Tue, 01 Oct 2024 16:53:41 +0000 https://www.wholetomato.com/blog/?p=3971 We are excited to announce the release of 2024.7 of Visual Assist! This update introduces several powerful features aimed at improving your coding efficiency and project navigation. Download the release now. Here’s a breakdown of...

The post Visual Assist 2024.7 release post first appeared on Tomato Soup.

]]>
We are excited to announce the release of 2024.7 of Visual Assist! This update introduces several powerful features aimed at improving your coding efficiency and project navigation. Download the release now.

Here’s a breakdown of what’s new in this version:

New Features:

1. Context-sensitive Naming in Quick Action and Refactoring menu items (Shift + Alt + Q)

The Quick Action and Refactoring menu is a powerful menu that shows different options depending on the context and the placement of the text caret. For instance, it changes depending on whether you are on a symbol, include directive, or whitespace—and with or without a selection.

In this release, the menu now takes other symbols and applicable features into consideration, which makes its naming much more intuitive and the menu inclusive of more possible actions. This also marks the start of making some of our menus more ubiquitous.

2. Improved Read and Write Reference Highlighting

This adds an option to disable the highlighting of references when it is not needed. This is to improve readability and reduce visual clutter.

Specifically, read and write references will now stop highlighting as soon as you move your mouse away from the reference. This keeps your workspace clean while maintaining the ability to easily locate references when needed.

Visual Assist’s highlighting.

3. Feature: Sort Methods in Source (Beta)

We’re excited to release the Sort Methods in Source feature as a beta version! This feature allows you to quickly organize and sort methods in your source files, making it easier to keep large codebases organised and easy to navigate. We welcome your feedback as we refine this feature for future updates.

4. Feature: Promote Lambda to Method (Beta)

The Promote Lambda to Method refactoring feature is now available in beta. This feature allows you to easily convert lambda functions into regular methods, helping to streamline your code structure and improve readability.

It is particularly useful for instances where you would like to reuse the same function in other places in the code. This feature takes that lambda and promotes it to a method in the corresponding class for it. 
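
Conceptually, the refactoring performs a change along these lines (a simplified before/after of our own; the Inventory and Item types are hypothetical, and the real feature handles captures and placement for you):

#include <algorithm>
#include <vector>

struct Item { double Weight = 0.0; };   // hypothetical type for illustration

class Inventory {
public:
    void SortItems();
    static bool IsLighter(const Item& a, const Item& b);   // the promoted method
    std::vector<Item> Items;
};

// Before: the comparison logic was trapped inside a lambda.
// std::sort(Items.begin(), Items.end(),
//           [](const Item& a, const Item& b) { return a.Weight < b.Weight; });

// After "Promote Lambda to Method": the logic is a named, reusable member.
bool Inventory::IsLighter(const Item& a, const Item& b) { return a.Weight < b.Weight; }

void Inventory::SortItems()
{
    std::sort(Items.begin(), Items.end(), &Inventory::IsLighter);
}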

Test it out and let us know how we can improve it! The settings can be found under the refactorings menu under Extensions under VisualAssistX, or just right click while in a lambda.

5. VA Nav Bar Dropdown for Project Switching

Switching between projects has never been easier! The VA Navigation Bar now allows you to seamlessly switch between multiple projects within your solution. This improvement makes project navigation faster and more intuitive—especially when working in large, multi-project solutions where each project has its own distinct code.

6. New reserved string for VA code snippets

VA’s code snippets will now have more reserved string keywords for finding specific parent folders. Reserved strings are keywords that automatically expand when a VA Snippet is invoked. A reserved string obtains its value from an IDE setting, project property, system setting, or surrounding code.

In this case, we added a reserved string for automatically inputting the directory to the Cmake parent folder.

Reserved strings are grouped by type, and can be inserted in the VA Snippet editor via context menu, toolbar button, or keyboard shortcut (Ctrl+I).

7. Option to Adjust Overwrite Behavior When Accepting a Completion

Set up overwriting behavior options: Visual Assist Extensions Options → Enhanced Listboxes.

You can now choose how Visual Assist handles suggestions from a listbox. For instance, when you’re typing in the middle of a word, VA suggests a completion for what you are typing.

The result differs depending on whether the symbol following the caret is known. If it is not a known symbol, VA overwrites the entire text after the caret with the auto-complete suggestion.

In any case, you can now choose the overwrite behavior by navigating to Visual Assist Extensions Options → Enhanced Listboxes → Overwrite text when accepting from a listbox.

 

Bug fixes and improvements:

  • Fixed error when renaming items in particular cases
  • Fixed Move Implementation header file error when on the first line
  • Fixed quick info menu not showing all options properly when enabled

 

Availability & Feedback

This release is available starting September 30 and can be downloaded via the Whole Tomato downloads page. As always, we encourage your feedback, especially on beta and alpha features, to help us continue improving and delivering the best experience for developers.

Thank you for your continued support and happy coding! If you have any questions or encounter any issues, feel free to reach out to our support team.

The post Visual Assist 2024.7 release post first appeared on Tomato Soup.

]]>
https://www.wholetomato.com/blog/visual-assist-2024-7-release-post/feed/ 0 3971
Making a case for investing in software tools: convincing yourself, your team, and your boss https://www.wholetomato.com/blog/making-a-case-for-investing-in-software-tools-convincing-yourself-your-team-and-your-boss/ https://www.wholetomato.com/blog/making-a-case-for-investing-in-software-tools-convincing-yourself-your-team-and-your-boss/#respond Fri, 27 Sep 2024 20:31:36 +0000 https://www.wholetomato.com/blog/?p=3958 Productivity is hard to calculate but there is a simple prerequisite to look out for: are your developers comfortable with their workstation. See how a team persuaded their management to invest in Visual Assist.

The post Making a case for investing in software tools: convincing yourself, your team, and your boss first appeared on Tomato Soup.

]]>
Introduction

Visual Assist has been a longtime partner for coding in Visual Studio. It adds missing features and sometimes even replaces the default features in the IDE. In fact, you could argue that Visual Assist had a direct influence on how some of Visual Studio’s own features panned out.

But what makes Visual Assist (VA) such a compelling purchase? And what makes it worthwhile software to keep using?

In this blog post, we share a story of how a small company that invested in VA a long time ago still remains staunch VA users despite numerous new alternatives available. Read on to find out what it is that keeps them renewing each year.

We found Ryan, a user who was a director at a small software company developing games. He was the type of person who wanted to make sure that his team (no matter how small) had access to the best tools and resources needed to deliver good quality within a reasonable delivery time.

The key word here is reasonable. His reasoning was that in order to create “high quality” work, he had to foster a working environment and workstation that made it easy to be productive. He didn’t feel obligated to have ultra high-end PCs, posh offices, or crazy setups, but he did invest in software until work felt easy and frictionless.

For Ryan, a frictionless workstation meant that the team had access to tools sophisticated enough to let them focus on innovating and problem solving. They had built a reliable set of software (modeling tools, profilers, code analyzers, and coding assistants) that made work comfortable: they didn’t have to do things 100% manually, their tools were smart enough to minimize the required actions, and they could automate simple and repetitive tasks.

Making navigation faster and easier: Visual Assist’s Find Symbol

In the course of collecting and adding to their suite of software, they found Visual Assist, a productivity plugin for Visual Studio. They had a pain point in navigating projects that made their daily experience with the IDE cumbersome and uncomfortable, and therefore bad for productivity.

Specifically, they were looking for “find symbol”-style navigation for Visual Studio C++, particularly for when they were browsing a large codebase and wanted to find some specific functionality but did not know exactly what it would be called or where it would be. They needed a dialog box that would search for any symbol across opened and unopened workspaces and reactively filter results based on the string the user starts typing. They expected the dialog to show classes, files, and much more matching the query string.

The problem was that, while this existed in Visual Studio, the results were scattered across a list of separate dialogs, each with its own scope: files by name, symbols in the currently opened file only, symbols in all opened files, and text across files (experimental at the time).

Furthermore, it was unsuitable because of the matching and search algorithm the default IDE uses. They needed something that could understand a more abstract, inexact query: a single, ubiquitous search dialog with fuzzy search that performed well even on large code bases.

That’s when they found Visual Assist. Here’s a quick comparison of how Visual Assist compares with the default IDE. 

The native find symbol feature in Visual Studio.

Visual Assist’s improved find symbol dialog. Provides more options.

By happenstance, it was recommended to them by an external developer, and it fit exactly what they were looking for. It also did not disturb their existing work patterns (and muscle memory) because it was just a plugin that added to or augmented their current IDE for C/C++ and C# (i.e. easy deployment).

The Visual Assist plugin they added had a more comprehensive, powerful, and sophisticated search dialog that was as performant as it was smart. It had fuzzy searching that made project navigation simpler, and a much more intuitive, easy-to-use UI in which configuration is just a matter of clicking options instead of grappling with multiple different dialogs (minimizing the required actions).

Fifteen minutes saved daily becomes an hour saved weekly, and adds up to almost an entire workday’s worth of time in one and a half months.

Discovering something unexpected: Visual Assist’s Code Refactoring

After a few months of using Visual Assist, Ryan and his team discovered that their newly acquired plugin also solved a problem they didn’t know they had. It’s one of those cases where, until you discover a better way of doing something, you don’t realize how inefficient you were.

The phantom pain point was maintaining code. Refactoring (translating and maintaining) code bases was a cumbersome, eye-straining process that dealt with unfamiliar, often outdated code. While working on deprecated or shelved projects, they had to update existing source to more modern C++ standards or more scalable code styles, which often involved manual checking and error-prone manual techniques.

With Visual Assist’s code refactoring and navigation support, the team was able to reduce code duplication and apply their intended refactorings with far less manual work.

For example:

  • Read unfamiliar code as if it were your own: There is a feature that lets users extract a method from a long function, after which they can refactor, rewrite, rename, or reuse the method (a rough before/after sketch appears below). They no longer had to fully comprehend unfamiliar code (e.g. code inherited from a colleague no longer on the team) just to refactor it for the current project.
  • Get method placeholders instantly: Write out a class declaration and have Visual Assist write the stubs for the member definitions in the corresponding source file, or rename variables across the whole project.
  • Find and jump to declarations: Search for declarations/definitions faster than IntelliSense can find them, open a file anywhere in the solution in only a few keystrokes, etc. It’s quite handy and easy to use.

VA’s Renaming feature which shows instances of a variable, its context, and available options in one convenient dialog. This made searching, refactoring code, and writing new code faster by about 20%.
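To make the extract-method point above concrete, here is a rough before/after sketch; the function and the validation rule are invented for the example, and the code Visual Assist produces may look slightly different.

#include <vector>

// Before: the validation rule is buried inside a longer function.
double TotalPayableBefore(const std::vector<double>& charges) {
    double total = 0.0;
    for (double c : charges) {
        if (c < 0.0 || c > 10000.0) {
            continue;  // skip obviously invalid entries
        }
        total += c;
    }
    return total;
}

// After extracting a method: the rule has a name, and can be renamed,
// rewritten, or reused without re-reading the whole function.
bool IsValidCharge(double c) {
    return c >= 0.0 && c <= 10000.0;
}

double TotalPayableAfter(const std::vector<double>& charges) {
    double total = 0.0;
    for (double c : charges) {
        if (IsValidCharge(c)) {
            total += c;
        }
    }
    return total;
}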

It’s like discovering a new shortcut to your office that makes your daily commute a few minutes faster—it seems like a marginal gain, but you realize it’s a task you do on a daily basis. 

That’s what they found with Visual Assist. Even if they weren’t actively looking for it, regular and continuous usage opened new opportunities to optimize their refactoring process. After discovering it, it would be difficult to revert to the original, more lengthy process.

To summarize:

    • It adequately solved multiple pain points.
    • It was inexpensive.
    • It was easily deployable to their current workflow.
    • It made their workflow more comfortable and efficient.

Making a case for investments in software

Unfortunately, many software companies miss the importance of providing tools like these. In other words, they may not consider them necessities as much as their developers do. There are two ways to look at it: you are either the developer/end user, or you are the C-level procurement officer.

The goal of this post (apart from sharing a success story) is to show how to present a case to management that developers benefit from a comfortable environment, and that this requires some investment in software. Otherwise, it’s like giving someone a hammer with no nails and expecting a house to be built in a reasonable time. Craftspeople need good tools. You might have a hammer and nails, but what if your only hammer is a rubber mallet? You’d be incredibly happy to finally get a proper metal hammer.

When you present a case (or when a case is presented to you), the normal reaction is to expect a numerical prediction of the returns. But as you may have surmised from the two examples we mentioned above, it’s not that simple for software tools:

  • It’s hard to quantify productivity.
  • There are some things you can only discover after using the tool.

However, that is not to say it’s impossible to make a case for a tool. In Ryan’s case, his team was dealing with the frustration of locating and navigating certain symbols. That’s expected: in Unreal’s sample shooter game alone, there are around 30,000 defined symbols, 1,200 files and headers, and even more references to and from your symbols. How much more would a full-fledged project contain?

At that point, it was obvious that simple navigation was a friction point in their daily workflow. If your hammer had a bendable handle, you could argue that it’s still usable and might be enough to build a simple structure. But with each successive swing, frustration and fatigue build up. The same happens to developers in front of their computers.

Now, even without a numerical representation, it becomes easier to convince decision makers that this tool is worth it just by simply observing how comfortable it makes developers in their workstation. (That’s why software tools often have free trial periods!)

Finally, it’s important to note that, much like the development environment itself, tools and plugins require mastery as well. When you make a case for an investment, remember that the value of a tool increases over time as users become more accustomed to it.

Before Ryan’s team first discovered VA’s refactoring features, the team had to rely on their own expertise and knowledge to refactor code bases. They first had to understand it themselves and then they had to rewrite code based on the latest coding standards and guidelines.

Over time, they found that VA was intelligent enough not only to make navigating and reading code easier, but to actually do some of it for them. If you’ve ever done any coding or similar thinking-heavy tasks, you know that even a brief interruption can make you lose your train of thought—and that happens often during coding. But with a coding assistant like Visual Assist, you get intelligent dialogs that show you everything you need to know. You get suggestions as you write, meaning timely prompts that let you stay focused. You can even get it to write blocks of code for you automatically.

Here’s the bottom line: a refactoring tool like VA reduces distracted time and increases productivity by letting developers focus on the real essence of the application (the code) and less on the plumbing (jumping from file to file for a single symbol definition).

Conclusion

There is no singular approach to finding out what or what not to invest in. And similarly, there is no magic bullet that will fix all productivity problems. But Ryan’s mindset on what his team needed and how he perceived the impact of a solution is a great start. 

The key takeaway is how important a comfortable workstation is: productivity and output quality drop radically when you are not given adequate tools to do your job. Apart from price and other technical factors, buying decisions should also be based on how much a tool benefits you and your team.

Try Visual Assist

Interested in getting the same benefits for you or your team?

Whether you’re looking to boost your team’s productivity or optimize your own development process, you can try Visual Assist for yourself and see why Ryan’s team continues to use it to this day.

 

The post Making a case for investing in software tools: convincing yourself, your team, and your boss first appeared on Tomato Soup.

]]>
https://www.wholetomato.com/blog/making-a-case-for-investing-in-software-tools-convincing-yourself-your-team-and-your-boss/feed/ 0 3958
Success Story: Visual Assist for modeling and simulation software for automotive C++ https://www.wholetomato.com/blog/visual-assist-automotive-c/ https://www.wholetomato.com/blog/visual-assist-automotive-c/#respond Thu, 26 Sep 2024 17:50:44 +0000 https://www.wholetomato.com/blog/?p=3927 About the Client Based in Europe, the client is a global company specializing in the development and manufacturing of high-performance systems for vehicle technology. As a company that has been in the industry for over...

The post Success Story: Visual Assist for modeling and simulation software for automotive C++ first appeared on Tomato Soup.

]]>
About the Client

Based in Europe, the client is a global company specializing in the development and manufacturing of high-performance systems for vehicle technology. As a company that has been in the industry for over a century, their longstanding focus on innovation has positioned them as one of the top automotive manufacturers worldwide. As part of their commitment to quality, they have invested heavily in simulation tools for vehicle design, testing, and validation, ensuring efficiency and reliability for their partner manufacturers.

Services offered by the company.

They engineer and produce various automotive technologies such as engine and electronics systems for passenger cars, commercial vehicles, and data measurement services.

 

Use case and challenges

We had the privilege of speaking with the lead developer and his team who create modeling and simulation software. We discussed their daily work and the challenges they face:

Use Cases:

  • They develop C++ applications in Microsoft’s Visual Studio for internal use.
  • They create bespoke programs for modeling components and simulating them in various scenarios.
  • Their primary language is C/C++ in Visual Studio because it interfaces easily with their other systems.

Challenges:

  • As an advanced tech provider, their workflow and output are highly specialized. Each project is tailor-made for a specific client or customer.
  • They have huge legacy code bases that they have to maintain and modernize. 
  • Because of the precision involved in measurements, they handle large amounts of data from different sources of measurement.

Solution

Visual Assist was introduced to the team many years ago and has since been a staple tool used daily by the developers. They use Visual Assist for a variety of use cases including:

  • Refactoring and modernizing code is exponentially faster.
    Because their toolchain was initially built sometime in the 1960s, they had a lot of code modernization and translation projects. They also had to integrate these with new tools and update them to the latest coding standards.

    Visual Assist’s refactoring feature has been an indispensable asset in updating the outdated code structures, making them more readable, memory-safe, and maintainable. It takes the pain out of manually bringing legacy or deprecated code up to standard by automatically renaming variables or extracting methods, reducing the risk of introducing errors during manual updates. This includes refactoring to use modern, secure and safe coding styles. Effectively Visual Assist simplifies their C++ code maintenance so that they can focus on manufacturing and designing parts, not code.
  • Navigating old code and huge projects happens in a single click.
    Visual Assist greatly helps the team get around their huge legacy projects with smart navigation features. Finding and searching for certain sections of code is a cumbersome ordeal that VA just completely skips over with features like Find References, Find Symbols, the various Go To functions, and the like.
  • Snappier performance on large projects and solutions.
    When it comes to handling large amounts of data, Visual Assist’s optimized startup speed and low memory footprint provide the team with snappy and accurate code assistance. Due to the repetitive nature of their projects, the few seconds that Visual Assist saves compound over time and can boost productivity by as much as 20%.

This non-exhaustive list is a testament to how Visual Assist can save hundreds of hours of valuable productivity time by providing smart suggestions, speedy features, and a satisfying experience for the Visual Studio IDE.

Interested?

Interested in getting the same benefits for you or your team? Visual Assist is free to try for thirty days.

Whether you’re looking to boost your team’s productivity or optimize your own development process, now’s the perfect time to upgrade your toolkit with one of the most trusted Visual Studio plugins. Click the link below to learn more about Visual Assist.

The post Success Story: Visual Assist for modeling and simulation software for automotive C++ first appeared on Tomato Soup.

]]>
https://www.wholetomato.com/blog/visual-assist-automotive-c/feed/ 0 3927
Getting started with how to use C++ for embedded systems in financial services https://www.wholetomato.com/blog/getting-started-with-how-to-use-c-for-embedded-systems-in-financial-services/ https://www.wholetomato.com/blog/getting-started-with-how-to-use-c-for-embedded-systems-in-financial-services/#respond Mon, 23 Sep 2024 16:56:12 +0000 https://www.wholetomato.com/blog/?p=3919 In today’s fast-paced financial technology landscape, the demand for robust, high-performance software is increasing. At the core of the majority of financial innovations lies C++, a language revered for its speed, efficiency, and control.  As...

The post Getting started with how to use C++ for embedded systems in financial services first appeared on Tomato Soup.

]]>
In today’s fast-paced financial technology landscape, the demand for robust, high-performance software is increasing. At the core of the majority of financial innovations lies C++, a language revered for its speed, efficiency, and control. 

As financial institutions continue to incorporate advanced electronics and embedded systems into their operations (be it through the ATMs we rely on for banking transactions, the sophisticated high-frequency trading platforms, or the secure transaction systems that protect our finances), C++ has become an indispensable tool.

Embedded systems are central to the proliferation of financial services which require real-time processing capabilities that only a highly performant language like C++ can provide. The financial sector’s demands for speed, precision, and security make C++ the language of choice for developers tasked with building the systems that underpin our financial infrastructure.

In this blog, we explore how C++ is used in these mission-critical financial systems. We’ll examine why it is suitable for embedded systems in finance.

Embedded systems in financial services

What are embedded systems?

Embedded systems are specialized computing systems designed to perform dedicated tasks within larger devices or systems. Unlike general-purpose computers, they are optimized for specific functions, often operating with real-time constraints and limited resources. Common examples of embedded systems include automotive control units, medical devices like pacemakers, and home appliances such as microwaves or washing machines. These systems are crucial in industries requiring precise control and efficiency, even outside the financial sector.

How embedded apps and digitalization are transforming financial software

The primary driver of the increasing demand for embedded systems is digitalization. Or to be more specific, inevitable progress in tech is opening more ways to serve underbanked communities; these opportunities require more and more digital alternatives to traditional banking. 

About two decades ago, the fintech model relied on single banks serving a whole community. Today, every business is expected to accept payments through digital platforms, credit cards, and other payment channels. This has minimized red tape, and payments and financial services have become more seamless.

For instance, e-wallets and banking apps on smartphones have certainly made financial services easier to access; however, physical devices must still be available for businesses to use as terminals and portals for digital transactions. This is where embedded systems come in.

Examples of Embedded Systems used in financial services

Point-of-Sale (POS) Systems

POS systems are ubiquitous in retail stores, restaurants, and other businesses that accept payments. These systems integrate embedded processors and software to handle various functions like:

  • Accepting credit/debit card payments
  • Tracking inventory and sales data
  • Generating receipts and reports

POS terminals are essentially embedded computers designed for payment processing and business management.

ATMs (Automated Teller Machines)

ATMs are self-service banking kiosks that contain embedded systems in the form of peripheral devices. Embedded systems help the main PC operating system manage the user interface, cash dispenser, and card reader. They also communicate with the bank’s central computer system.

Contactless Payment Terminals

Contactless payment terminals are embedded systems that enable customers to make payments by tapping or waving their credit/debit cards or mobile devices near the terminal. These terminals use near-field communication (NFC) technology and are commonly found at retail checkouts and transit fare gates. Smartwatches, fitness trackers, and other wearable devices can be embedded with payment capabilities.

Section 2: C++ in finance and banking

Why financial embedded systems use C++

Embedded systems use C++ because it lets developers control hardware directly while still keeping the code organized and easier to manage. There is a good mix of low-level hardware control and high-level programming abstractions.

C++ is great for devices with limited memory or processing power, like small sensors or controllers, because it helps the code run fast. It also allows developers to write code that can work on different types of devices without starting from scratch. This makes C++ a popular choice for many embedded systems. Additionally, C++ offers portability, making it easier to adapt code across different embedded platforms.

The demands of financial software

In the financial sector, software systems face exceptionally high demands. These systems must deliver extreme performance, steadfast reliability, and robust security to support critical functions like real-time trading, transaction processing, and risk management. The stakes are incredibly high, as even minor software failures can result in significant financial losses, security breaches, and a loss of client trust. 

C++ is well-equipped to meet these rigorous requirements. Renowned for its speed and efficiency, C++ enables developers to create high-performance applications crucial for environments where every millisecond can impact trading results. Its low-level memory control allows for precise management of system resources, ensuring both stability and responsiveness in financial systems. Additionally, C++ is supported by a comprehensive suite of libraries designed for complex financial operations, making it an ideal choice for developing secure and high-performing financial software.

Advantages of the C++ language in Financial Software

How each C++ property compares to other languages used in finance:

  • Lower-level language: C++ code compiles into highly efficient machine-like code, providing real-time processing capabilities and scalability. It is faster than interpreted languages like Python or JavaScript, which are unsuitable for real-time performance requirements.
  • Speed and performance: C++ handles intensive computational tasks with minimal overhead, making it ideal for high-performance applications. Python, similarly popular in finance programming, offers simplicity and faster development cycles but lacks the execution speed needed for high-performance financial software.
  • Embedded-specific support: C++ allows you to disable certain features (like exceptions, via no-exception builds) to minimize overhead. Languages like Java have less flexibility in trimming down features for embedded use.
  • Scalability and processing power: C++ can accommodate increasing volumes of data and transactions, a necessity in a growing financial sector. Java strikes a balance between usability and performance but cannot match the raw processing power and system control that C++ provides.
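As a small illustration of the “trimming features” point above: embedded builds are often compiled with exceptions disabled (for example with GCC/Clang’s -fno-exceptions flag), so failures are reported through return codes rather than thrown exceptions. The sketch below assumes that style; the enum and function are illustrative only.

// Built with exceptions disabled (e.g. g++ -fno-exceptions), errors travel
// as status codes instead of thrown exceptions.
enum class TxStatus { Ok, InsufficientFunds };

TxStatus DebitAccount(long& balanceCents, long amountCents) {
    if (amountCents > balanceCents) {
        return TxStatus::InsufficientFunds;  // reported, not thrown
    }
    balanceCents -= amountCents;
    return TxStatus::Ok;
}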

Section 3: The challenges for C++ programmers developing embedded systems

In the high-stakes world of financial systems, performance optimization is not merely an option but a critical necessity. Financial applications, such as high-frequency trading platforms and real-time risk management systems, operate under intense performance constraints where even the smallest delay can have significant repercussions. As a result, C++ developers are tasked with continuously fine-tuning their code to meet performance requirements.

One of the primary challenges in this optimization process is managing memory. C++ provides low-level control over memory allocation, which allows for precise performance tuning but also demands that developers manually handle memory management. This responsibility includes careful allocation and deallocation to prevent memory leaks and ensure efficient resource utilization. 

Additionally, reducing latency is crucial in financial applications where timely processing of data and execution of trades are essential. Developers must implement strategies to minimize latency, which involves optimizing algorithms and data structures and reducing the impact of I/O operations. Productivity-enhancing tools such as Visual Assist, which simplify refactoring, help immensely here as they can help spot unnecessary elements—more on helpful tools later.

Maintaining code quality while optimizing performance presents another challenge. Performance enhancements often require low-level changes to the code, which can complicate readability and maintainability. Balancing the need for high performance with the necessity of keeping the codebase understandable and manageable is a continuous struggle for C++ developers working in the finance sector. 

Readability is an often underestimated facet of development. Embedded code can be hard to read, or can drop from C++ down to lower-level C. For instance, when accessing I/O pins on an embedded device via a cable plugged into “general purpose I/O” (GPIO) pins, you have to use the base-level language that can communicate with the hardware itself. At that point, it’s key to have tooling that helps you understand and verify your code as you move between higher and lower levels of abstraction across languages.

As simple as possible: C++ vs Embedded C++

When discussing C++ versus Embedded C++, it’s essential to understand that while they share a common language foundation, the environments in which they are applied significantly influence the design, usage, and constraints of these two variants.

The main difference with C++ in embedded systems is that it has to be more efficient because devices often have limited memory and processing power. Embedded C++ also involves directly controlling hardware, like sensors and processors, which isn’t as common in traditional C++. Finally, some C++ features, like dynamic memory management, are used less or even avoided entirely in embedded systems to avoid performance issues. Rather than using the standard STL, it’s common to use other libraries tailored for embedded use, like the ETL.

  • Memory management and constraints

C++ on a desktop or server system operates in a much more forgiving environment. It has access to extensive memory, high processing power, and can rely on an operating system for memory management and multitasking. In contrast, Embedded C++ targets microcontrollers or other resource-constrained devices, where memory (both RAM and flash) is limited, and there may not be an operating system at all.

For instance, in an embedded system, dynamic memory allocation using new and delete can be risky due to fragmentation, leading to memory exhaustion over time. Many embedded systems developers avoid heap allocation entirely, preferring static or stack allocation, or using custom memory management techniques tailored to the system’s constraints.
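To make that concrete, here is a minimal sketch of the static-allocation style described above; the record type and buffer size are illustrative assumptions, not taken from any particular device.

#include <array>
#include <cstddef>

struct TxRecord { long amountCents; unsigned timestamp; };

// A fixed-capacity transaction log: all storage is reserved up front,
// so there is no new/delete, no fragmentation, and no risk of heap exhaustion.
class TxLog {
public:
    bool Push(const TxRecord& r) {
        if (count_ >= records_.size()) return false;  // full: the caller decides what to do
        records_[count_++] = r;
        return true;
    }
    std::size_t Count() const { return count_; }
private:
    std::array<TxRecord, 256> records_{};  // statically sized, no dynamic allocation
    std::size_t count_ = 0;
};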

Some devices, such as ATMs or POS systems, need a small amount of flash memory, a form of non-volatile memory, to keep a small database. For example, some systems need to keep the past 24 hours of transactions on the device itself as a backup for when the bank network goes down unexpectedly. For these cases, reliable, memory-efficient libraries for compression and embedded databases are used.

  • Performance and real-time requirements

Another significant difference arises in performance and real-time behavior. In standard C++ applications, performance is still important, but not necessarily tied to hard real-time requirements.

In contrast, embedded systems often have strict timing constraints, and code must execute within a specific time frame to meet system requirements. This demands careful optimization and the avoidance of certain C++ abstractions that can introduce unpredictable execution times.

For example, C++ standard library features like the Standard Template Library (STL) may not be suitable for embedded environments. Functions like std::vector or std::map can introduce hidden memory allocations and performance overhead, which can be detrimental in a real-time system. 

As a result, embedded C++ developers often resort to using lightweight custom libraries or writing their own data structures optimized for their specific hardware. You can use libraries like the Embedded Template Library (ETL), which provides STL-like functionality intended for embedded devices. You can also search this list of libraries from GitHub user “fffaraz” using the search term “embedded” for more resources specific to embedded systems.
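As a sketch of what that looks like in practice, the snippet below assumes the ETL’s etl::vector, whose capacity is a compile-time template parameter so that push_back never touches the heap; check the ETL documentation for the exact header and API of your version.

#include <etl/vector.h>  // header name per the ETL documentation (assumption)

// Capacity is fixed at compile time and the storage lives inside the object,
// so unlike std::vector there is never a dynamic allocation.
etl::vector<int, 16> prices;

void RecordPrice(int p) {
    if (prices.size() < prices.max_size()) {  // stay within the fixed capacity
        prices.push_back(p);
    }
}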

  • Hardware Interfacing

Embedded systems often require precise control over hardware peripherals, like I/O pins, timers, or communication interfaces. This entails hardware-specific code, where developers directly manipulate memory-mapped registers to control the device.

In standard C++, you rarely deal with such low-level hardware specifics. Embedded C++ developers, however, often need to interact directly with hardware registers and bit manipulation, as shown in the examples with the ATM or POS systems. This introduces a level of complexity not typically found in standard desktop or server C++ development.
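A minimal sketch of that register-level style is shown below. The addresses, offsets, and bit layout are entirely hypothetical and exist only to illustrate the pattern; real values come from the microcontroller’s datasheet.

#include <cstdint>

// Hypothetical memory-mapped GPIO output register (illustration only).
constexpr std::uintptr_t kGpioBase   = 0x40020000;
constexpr std::uintptr_t kGpioOutReg = kGpioBase + 0x14;

inline void SetPinHigh(unsigned pin) {
    // volatile tells the compiler every access matters and must not be optimized away.
    auto* out = reinterpret_cast<volatile std::uint32_t*>(kGpioOutReg);
    *out |= (1u << pin);   // set the bit driving the pin
}

inline void SetPinLow(unsigned pin) {
    auto* out = reinterpret_cast<volatile std::uint32_t*>(kGpioOutReg);
    *out &= ~(1u << pin);  // clear the bit
}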

  • Debugging Challenges

By their very nature, embedded systems are more complex to debug because they lack the typical debugging resources available in standard C++ environments. Desktop developers can rely on sophisticated debuggers, full IDEs, and graphical interfaces to step through code, inspect memory, and trace program execution. In contrast, embedded developers often work without these luxuries.

Debugging tools may be limited to physical devices that plug into the circuitry, or maybe testers and emulators that merely simulate the device. The best case scenarios will involve some form of rudimentary debugging tool integrated into the device. But for the most part, it will still be a step down from traditional C++ debugging.

Section 4: Pro tips for C++ developers for embedded systems

If you’re a novice or intermediate C++ developer looking to specialize as an embedded software developer, here are a few core competencies and guiding ideas you can study, arranged in order of importance:

  • Understand the embedded systems basics
    Understanding the fundamentals of embedded systems and how they differ from general computing.

    • What are embedded systems? (Microcontrollers, sensors, actuators, etc.)
    • Key differences between embedded and traditional software development.
    • Real-time systems and their importance.

Recommended read/watch: “Introduction to Embedded Systems” by Jonathan Valvano (Textbook).

  • C++ for Embedded Systems
    Learning how C++ is used in resource-constrained environments.

    • Writing memory-efficient and performance-critical code.
    • Avoiding dynamic memory allocation (heap vs stack).
    • Using low-level hardware interfaces (registers, ports, etc.).

Recommended read/watch: “Embedded: Customizing Dynamic Memory Management in C++” by Ben Saks in CppCon 2020.

  • Learning Microcontrollers
    Gain practical experience with microcontrollers, one of the basic programmable elements in embedded development environments.

    • Introduction to microcontrollers (e.g., ARM Cortex, AVR, ESP32).
    • Setting up a development environment (IDE, toolchains).
    • Flashing code to the microcontroller.

Recommended read/watch: “C++ For Microcontrollers – Introduction”  by Mikey’s Lab

  • Optimization and Power Management
    Learn how to optimize embedded C++ code for performance and power consumption.

    • Code optimization techniques (e.g., loop unrolling, inline functions).
    • Power-saving modes in microcontrollers.
    • Balancing performance and power consumption.

Recommended read/watch: “Introduction to Embedded Systems” by Jonathan Valvano (Textbook).

  • Debugging Techniques for Embedded Systems
    Get a proper introduction to the debugging techniques specific to embedded development.

    • Using in-circuit debuggers (ICDs) and logic analyzers.
    • Setting breakpoints, watching variables, and stepping through code.
    • Dealing with hardware-software integration bugs.

Recommended read/watch: Variety of courses from Feabhas

Visual Studio as the Go-To IDE

In embedded systems C++ development, a few IDEs stand out for their ability to handle high-performance applications. CLion by JetBrains is popular for its strong code analysis and integration with CMake, supporting multi-platform projects. Its tools for memory profiling and real-time inspections are especially useful in financial software, where precision is key.

Eclipse CDT offers flexibility and powerful debugging features, with support for plugins and external tools like GDB. Its open-source nature makes it a cost-effective choice for developers aiming to optimize performance.

However, Visual Studio is the industry’s top choice, thanks to its advanced debugging tools like breakpoints and call stack analysis, essential for resolving issues in complex financial applications. For custom hardware, it’s common to only get Visual Studio support. It also offers code analysis, performance profiling, and cross-platform support, including Linux. These features make Visual Studio a comprehensive and scalable option, ideal for financial developers seeking reliability across multiple platforms.

Enhancing Productivity with Visual Assist

For C++ developers working in finance, Visual Assist is an indispensable extension that significantly enhances productivity. This powerful tool integrates seamlessly with Visual Studio, offering a range of features designed to make coding faster and more efficient.

A practical example of how Visual Assist can accelerate development is its Convert Pointer to Instance refactoring feature. In financial applications, optimizing memory usage is critical. This feature allows developers to easily convert heap-allocated pointers to stack-allocated instances, which can enhance performance and reduce memory overhead. By simplifying these refactoring tasks, Visual Assist helps developers focus on implementing and refining the core functionalities of their financial software. 
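The effect of that refactoring looks roughly like the sketch below; the report type and its contents are invented for illustration, and the exact code Visual Assist produces may differ.

#include <memory>
#include <string>

struct RiskReport { std::string summary; };

// Before: heap-allocated even though the object never outlives the function.
void PrintReportBefore() {
    auto report = std::make_unique<RiskReport>();
    report->summary = "intraday exposure within limits";
    // ... use report ...
}

// After Convert Pointer to Instance: a plain stack object, no allocation, no indirection.
void PrintReportAfter() {
    RiskReport report;
    report.summary = "intraday exposure within limits";
    // ... use report ...
}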

In summary, Visual Studio combined with Visual Assist provides a powerful toolkit for C++ developers in the finance industry, enhancing both the development experience and the quality of the final product.

Section 5: The Future of C++ in Embedded Systems for Finance

Emerging Trends

The integration of embedded systems into financial applications is becoming increasingly prevalent, driven by advancements in technology and the growing need for real-time data processing and enhanced security. Embedded systems, such as Internet of Things (IoT) devices and advanced security systems, are playing a crucial role in modern financial infrastructure. For example, IoT devices can provide real-time analytics and monitoring for financial transactions, while sophisticated security systems use embedded technology to protect sensitive data and prevent fraud. 

C++ is well-positioned to adapt to these emerging trends due to its versatility and efficiency. As embedded systems become more integral to financial applications, C++ continues to offer the performance and control needed to develop robust solutions. The language’s ability to interface directly with hardware and manage resources at a low level makes it ideal for embedded development, where precision and efficiency are paramount. Additionally, C++ is evolving to support new standards and libraries that enhance its capabilities for embedded applications, ensuring that it remains a key language in the financial sector’s future.

Preparing for the Future

To stay ahead in the field of C++ development for embedded systems, it is essential to engage in continuous learning and stay abreast of technological advancements. The financial sector is rapidly evolving, and developers must be proactive in acquiring new skills and knowledge to remain competitive. This includes familiarizing oneself with the latest developments in embedded systems, such as new IoT protocols and security technologies, as well as advancements in C++ standards and tools.

Leveraging new tools and technologies can also significantly impact productivity and reduce stress in high-pressure environments. For instance, adopting modern IDEs and development environments that offer powerful debugging, profiling, and refactoring capabilities can streamline the development process and help manage the complexities of embedded systems. Tools that automate routine tasks and provide advanced code analysis can save valuable time and reduce the cognitive load on developers, allowing them to focus on more strategic aspects of their work.

In summary, the future of C++ in embedded systems for finance looks promising, driven by the increasing integration of advanced technologies and the language’s continued evolution. By staying informed about emerging trends and adopting tools that enhance efficiency and reduce stress, C++ developers can position themselves for success in this dynamic and evolving field.

Conclusion

In this blog, we’ve explored the pivotal role of C++ in the development of financial software and embedded systems, highlighting its unmatched performance, reliability, and efficiency. We discussed how C++ meets the rigorous demands of financial applications by offering precise control over system resources and supporting complex, high-performance operations. Additionally, we examined the common challenges faced by developers, such as performance optimization and debugging, and how tools like Visual Studio and Visual Assist can alleviate these difficulties.

As financial systems continue to evolve and embedded systems become more integrated, C++ remains a critical language due to its adaptability and powerful capabilities. The language’s ability to deliver real-time processing and manage resources efficiently ensures its continued relevance in the financial sector.

We encourage readers to explore the benefits of Visual Studio and Visual Assist to enhance their development process. By leveraging these tools, developers can streamline their workflows, improve code quality, and handle the complexities of high-performance financial software more effectively. Embracing these technologies will not only improve development efficiency but also contribute to the creation of robust and reliable financial systems.

The post Getting started with how to use C++ for embedded systems in financial services first appeared on Tomato Soup.

]]>
https://www.wholetomato.com/blog/getting-started-with-how-to-use-c-for-embedded-systems-in-financial-services/feed/ 0 3919
The biggest challenges in writing C++ programs for finance and banking https://www.wholetomato.com/blog/the-biggest-challenges-in-writing-c-programs-for-finance-and-banking/ https://www.wholetomato.com/blog/the-biggest-challenges-in-writing-c-programs-for-finance-and-banking/#respond Wed, 28 Aug 2024 05:44:14 +0000 https://www.wholetomato.com/blog/?p=3899 Introduction When it comes to developing software for the finance and banking industry, C++ is often the language of choice due to its performance, efficiency, and flexibility. However, writing C++ programs in this highly regulated...

The post The biggest challenges in writing C++ programs for finance and banking first appeared on Tomato Soup.

]]>
Introduction

When it comes to developing software for the finance and banking industry, C++ is often the language of choice due to its performance, efficiency, and flexibility.

However, writing C++ programs in this highly regulated and fast-paced environment comes with its own set of challenges. From managing the complexity of legacy codebases to ensuring real-time performance for trading systems, developers face numerous hurdles. Stringent security measures, compliance with industry regulations, and the ever-present demand for high reliability and accuracy further compound the problem.

In this blog, we will explore some of the biggest challenges C++ developers encounter when creating software solutions for the finance and banking sector.

Why use C++ in Financial Software

Banks and financial institutions are always looking to improve their trading infrastructure and upgrade their data-management capabilities. Having the best mathematical models helps generate profits and reduce risk in a highly volatile and time-sensitive market.

And it just so happens that C++, a lower-level language, is the top choice due to its speed and efficiency, making it the preferred language for high-frequency trading platforms, risk management systems, and other critical financial applications.

The challenges to becoming a programmer in the financial industry

When you’re a developer in the financial industry, it’s almost always a given that, apart from being able to program, you are also expected to understand the math needed to validate various financial models. Some developers may also conduct research and hypothesize about new trading strategies themselves.

Becoming a quantitative analyst, bank developer, or high-frequency trader can be a very lucrative career choice. However, it also means there are stricter requirements and skill sets needed to qualify.

As an aspiring developer, here are the key problems and frustrations that C++ developers in the financial industry should keep in mind:

Training requirements and developer skill set

  • Steep learning curve
    You can be a decent trader and researcher using basic programming and scripting languages such as Python. On the other hand, knowing C++ only at a broad level won’t help you much, since you won’t be able to exploit its low-latency advantages. If you really want to implement models and develop applications for the industry, you need a certain level of optimization skill first.
  • Understand modeling and simulations. It comes as no surprise, but there is a hefty amount of math involved in the financial industry. Financial algorithms can be mathematically intensive, requiring developers to have a strong understanding of quantitative finance and numerical methods.
  • Need to invest in skills other than programming? Developers often need to implement complex models that simulate market conditions or risk factors, which requires a deep understanding of both finance and C++. However, this is less of a problem if you’re working with a diversified team of developers, traders, and analysts.

Programming requirements: Performance Optimization

  • Low Latency Requirements
    Financial applications, especially in trading, require extremely low latency. Developers must continuously optimize their code to reduce execution time to microseconds or even nanoseconds.
  • Resource Management
    Efficient memory management is crucial—each unoptimized bit of code can add micro-delays that can be the difference between a winning and a losing trade. C++ developers need to carefully manage resources, avoid memory leaks, and ensure optimal memory performance in their code (a small sketch of this kind of optimization follows this list).
  • Accuracy and code correctness: Financial applications often rely on parallel processing to handle large volumes of data. The source code and the project itself may not be massive, but the logic involved must be exact because of the sensitive nature of market prices. Still, catching developer mistakes and subtle errors in C++ can be challenging.
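As a small, illustrative sketch of the kind of optimization those points describe, allocations are typically hoisted out of the latency-critical path so the hot loop never touches the allocator; the buffer size and processing logic here are placeholders.

#include <vector>

void ProcessTicks(const std::vector<double>& ticks) {
    std::vector<double> window;
    window.reserve(1024);            // pay the allocation cost once, up front

    for (double px : ticks) {        // hot path: no new/delete, no reallocation
        if (window.size() == window.capacity()) {
            window.clear();          // recycle the buffer instead of growing it
        }
        window.push_back(px);
        // ... compute a signal from the window ...
    }
}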

Programming requirements: Compliance and Regulations

  • Compliance with regulations
    Apart from being mathematically complex as it is, financial software must comply with stringent regulations from both the company and the government. Developers need to ensure that every bit of their code adheres to compliance requirements—these can vary by region and change frequently.
  • Auditability
    The code must be auditable, meaning that it should be easy to trace and understand how financial decisions are made by the software, which adds another layer of complexity.
  • Vulnerability Management
    There are many available libraries and third party extensions for C++ developers. Developers, however, need to stay on top of potential vulnerabilities in C++ libraries or the codebase itself to prevent exploits.

Tips for facing these challenges

  • Study the math, polish your C++
    As mentioned earlier, you can be a pure developer and just implement whatever algorithms are supplied to you. But to become a better analyst and interpret trends yourself, you need to equip yourself with more than programming skills. If you’re looking to familiarize yourself with the concepts, there are many great resources available, such as Investopedia. For specific use cases or general C++ skills, a good old reference book (such as those from Scott Meyers or one from Bjarne Stroustrup himself) will always be a great option. There are also good online resources covering high-performance C++.

  • Invest in understanding above and beyond your tasks

Banks and financial institutions, especially top ones, will only hire cream-of-the-crop developers. Average developers with pedestrian-level finance knowledge are less appealing for the simple reason that, for an expensive role, financial firms expect maximum returns.

This often means that being a financial developer entails learning and understanding current market trends, calculating opportunity costs, and economic theories yourself—not just the technical aspects of implementing them into an algorithm.

  • Get all the help you can

Take note of tidbits of knowledge you’ll pick up on the spot from existing codebases accessible to you. Colleagues may also come to you directly and give you advice on how best to tackle certain financial puzzles.

As for developer tools, it is easy to underestimate how helpful they can be when you’re developing software and finance algorithms. Having a conducive, smart development environment can be the small difference between a timely implementation that earns your company massive profits and an unfortunate missed opportunity.

Try to invest in software that allows you to focus on the core work, such as thinking and planning. For example, there are many productivity tools that help developers monitor their code’s quality, and others that help maintain or refactor code bases. These are all tools that can help you stay on the cutting edge.

Protip for those coding in Visual Studio C++

Visual Studio remains the premier IDE for C++, especially serious C++ programming such as financial services. That includes deploying to Linux. Visual Studio is a robust IDE for developing C++ financial programs because it offers powerful debugging and code analysis tools, which are crucial for maintaining high-quality, error-free code in critical financial applications, plus strong performance and profiling tools. 

It provides extensive support for modern C++ standards and libraries, ensuring compatibility and performance optimization. The IDE integrates well with various version control systems, enabling smooth collaboration and code management among development teams. Additionally, Visual Studio’s extensive ecosystem of extensions and plugins allows developers to customize their environment to fit specific financial industry requirements.

There are general plugins that augment the entire IDE with faster processes and more intuitive workflows. For example, Visual Assist, one of the most popular VS extensions, provides faster ways to navigate projects, convenient one-click solutions to maintaining code, and additional syntax support not available in the default VS IDE. Here are some specific features:

When writing high-performance C++ you’ll find yourself doing things like avoiding memory allocation, and Visual Assist’s set of refactorings can assist with all sorts of code restructuring that supports those improvements. A trivial example is converting a heap allocation to a stack allocation via the Convert Pointer to Instance refactoring.

You can’t underestimate how helpful that is, especially in a high-stress, time-sensitive profession.

Those jobs are high stress, and lots of crunch is expected. Our navigation features get you around much faster than the built-in tools: Open File in Solution, Find Symbol in Solution, and Find References just work that much better and faster.

Conclusion

Becoming a programmer in the financial industry is no small task. There are many significant challenges presented to you both as a programmer and as a learner. It is a constantly evolving profession—like a perpetual hackathon. You have to stay on top of tech and industry trends to ensure your company is getting the best results it can.

Study beyond your delegation. Utilize all the tools at your disposal. And most importantly, persevere. 

The post The biggest challenges in writing C++ programs for finance and banking first appeared on Tomato Soup.

]]>
https://www.wholetomato.com/blog/the-biggest-challenges-in-writing-c-programs-for-finance-and-banking/feed/ 0 3899
Installing Virtual Machines to use Visual Studio on Mac https://www.wholetomato.com/blog/installing-virtual-machines-to-use-visual-studio-on-mac/ https://www.wholetomato.com/blog/installing-virtual-machines-to-use-visual-studio-on-mac/#respond Tue, 27 Aug 2024 14:58:10 +0000 https://www.wholetomato.com/blog/?p=3868 There are many options for IDEs for developers who are working on a Mac; however, there may still be use cases and instances where the available options are insufficient. For example some projects and client...

The post Installing Virtual Machines to use Visual Studio on Mac first appeared on Tomato Soup.

]]>
There are many IDE options for developers working on a Mac; however, there may still be use cases where the available options are insufficient. For example, some projects and client requirements may dictate the use of Microsoft’s Visual Studio (VS), which is predominantly designed for the Windows OS.

As a workaround, what most Mac users have done (and what is one of Microsoft’s recommendations) is install a virtual machine on their ARM Macs to run a Windows environment and use Visual Studio from there.

This guide will walk you through the entire process of installing and using Visual Studio on a Mac, with a special mention of a handy productivity plugin you can add to make its performance closer to a natively-installed app.

The Different Visual Studios on Mac

Before you dive right in, here’s something to consider before you install: there are similarly named versions of Visual Studio—and you need to know which one you are looking for.

The first one is the native app “Visual Studio 2022 for Mac” (VS 2022 Mac). The naming situation is similar to Visual Studio Code versus Visual Studio—they’re two completely different products that confusingly share a similar name.

The native Visual Studio for Mac is largely based on Xamarin, another cross-platform framework for building native mobile apps on iOS, Android, and Windows. It is primarily used for C# or .NET development. Consequently, Visual Studio for Mac is also used primarily for C# development. 

VS 2022 Mac has been discontinued in favor of “Visual Studio Code” (VSC) for Mac. You can use Microsoft’s VSC with the new C# Dev Kit and related extensions in lieu of VS 2022 Mac. The caveat is that VSC may not be enough for C++ developers, or for C# developers who rely on VS’s frameworks and libraries for their app or program development needs.

Fortunately, if you’re opting for VSC on Mac, it may be good to know that there is less discrepancy between the Windows and Mac versions of VSC—just a few keystroke and shortcut differences. 

To summarize, here are the Visual Studios that you can use on Mac:

  • Visual Studio 2022 for Mac — the Xamarin-based native app (now discontinued)
  • Visual Studio Code for Mac — the VS Code editor on macOS; almost identical to the Windows version
  • Visual Studio Code for Windows — the VS Code editor on Windows
  • Visual Studio for Windows — the full native Windows IDE (in our case, installed on a virtual machine)

Of course, users can also opt for alternative IDEs. In this blog, however, we will show you the last option in the list above: setting up a virtual machine on your Apple Silicon Mac and then installing the complete Windows version of Visual Studio (VS for Windows on a VM).

 

Why you may need Visual Studio for Windows on Mac

The primary reasons to use Visual Studio for Windows on a Mac are the following:

  • Maintain compatibility with Windows-based projects
  • Rely on features that are specific to the Windows version of Visual Studio, such as:
    • developing .NET applications
    • working with Azure
    • integrating third-party tools that only support Visual Studio
  • And, of course: you use a Mac!

Other considerations come down to developer preference, for example, developers who prefer Visual Studio for tasks like debugging complex applications, managing large solutions, or using specialized extensions that are only available on the Windows version.

For game developers using Unity, Xamarin developers building cross-platform mobile apps, or .NET developers focusing on backend and cloud development, using a VM allows you to retain access to the full suite of Visual Studio’s tools. This includes robust debugging features, integrated version control with Git, and comprehensive support for a variety of programming languages and frameworks.

Prerequisites for Installation

Visual Studio 2022 has official system requirements which you can read here. To summarize, here is our advice:

  • It runs on both Intel and ARM computers
  • You will need to install either the Intel or the ARM version of Windows. You can’t run the Intel version of Windows on an ARM Mac, not even in a VM. The ARM version of Windows runs Intel apps just fine, including under a debugger, in our experience.
  • Dedicate plenty of RAM and multiple cores to your VM. We recommend giving the virtual machine at least 4GB of your host Mac’s RAM. In general, the beefier a machine is in terms of RAM and cores, the more VMs you can run at once.
  • While you can use an old Intel Mac, the Apple Silicon ones are very performant and we strongly recommend using an M-series ARM Mac. Any of them. They’re all good.

If you’ve never used a virtual machine for development before, you might be worried about performance – after all, it’s not running directly on the hardware, right? In practice, this is not an issue. Modern CPUs have inbuilt support for running virtual machines and your VM is not emulated; it runs code directly on the CPU just like the host operating system does.

The biggest mistake people make is not giving a VM enough RAM or dedicated CPU cores. Run on a powerful machine and configure the VM with at least a couple of cores and at least 4GB of RAM. If you do heavy computation on the VM (building large projects, etc.), increase that. Make sure the host machine is powerful enough that if you allocate, say, half of its resources to the VM, both sides still have enough to run comfortably. A modern MacBook Air has 8 CPU cores, so you can allocate 2 to 4 of them to the VM; if you have 16GB of RAM, you can allocate 4GB to the VM and leave macOS 12GB. This kind of setup works well.
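If you prefer to script this rather than click through the Parallels configuration dialog, Parallels ships a command-line utility called prlctl (availability can depend on your edition, so check your installation). A minimal sketch, assuming a VM named “Windows 11”; adjust the name and numbers to match your setup:

    prlctl stop "Windows 11"                          # CPU and memory settings can only be changed while the VM is stopped
    prlctl set "Windows 11" --cpus 4 --memsize 8192   # 4 virtual CPU cores, 8 GB of RAM (value is in MB)
    prlctl start "Windows 11"

The equivalent GUI settings live in the VM’s configuration window under Hardware, if you would rather not use the terminal.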

Before diving into the installation process, ensure your Mac meets the following requirements. In short, any recent Mac will meet them in terms of raw performance; the more important considerations are available RAM and disk space.

Hardware Recommendations:

  • Processor: Modern M-series (Apple Silicon) or Intel processors are more than capable of handling Visual Studio within a VM.
  • RAM: Minimum of 4 GB (16 GB recommended for typical professional solutions).
  • Hard Disk Space: Minimum of 850 MB up to 210 GB of available space, depending on the features installed (20-50 GB of free space is typical). Installing Windows and Visual Studio on a solid-state drive (SSD) is recommended for increased performance.

By following the recommended setup, you’ll meet or exceed the necessary hardware requirements, making your development experience seamless even within a virtualized environment.
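If you want to double-check what your Mac actually has before allocating resources to the VM, a few stock macOS terminal commands will tell you (nothing extra to install):

    sysctl -n hw.memsize   # physical RAM in bytes (divide by 1073741824 for GB)
    sysctl -n hw.ncpu      # number of logical CPU cores
    df -h /                # free space on the system volume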

Step-by-Step Installation Guide

In this guide, we’ll walk you through installing Visual Studio on your Mac using a virtual machine. Since Visual Studio is no longer natively supported on macOS, setting up a virtual machine (VM) is the best approach to ensure you have access to the full range of Visual Studio features. Below, we’ll outline the steps using Parallels Desktop, a popular VM software for Mac.

Step 1: Choosing Your Virtual Machine Software

Before installing Visual Studio, you need to set up a virtual machine that runs Windows on your Mac. Here are some of the top options currently available:

  • Parallels Desktop: Known for its seamless integration with macOS, Parallels is user-friendly and optimized for running Windows on Apple Silicon (M1/M2) and Intel-based Macs.
  • VMware Fusion: A robust alternative to Parallels, VMware Fusion offers advanced features and supports a wide range of operating systems.
  • VirtualBox: An open-source option that is free to use, though it may require more manual configuration and might not offer the same level of performance as Parallels or VMware Fusion.

For this guide, we’ll focus on Parallels Desktop. It is the route Microsoft officially supports for running Windows on a modern ARM Mac.

Step 2: Installing Parallels Desktop

  1. Download Parallels Desktop:
    • Visit the Parallels Desktop website.
    • Click on the “Try Now” or “Buy Now” button, depending on whether you want a trial or full version.
    • The installer file will start downloading.
  2. Install Parallels Desktop:
    • Open the downloaded .dmg file.
    • Drag the Parallels Desktop icon to the Applications folder.
    • Open the Applications folder and double-click the Parallels Desktop icon to launch it.
    • Follow the on-screen instructions to complete the installation. You may need to grant permissions and sign in with a Parallels account.
  3. Set Up a New Windows Virtual Machine:

    • When you first launch Parallels Desktop, it will prompt you to set up a new VM.
    • Choose to install Windows from an ISO image file or from an existing Windows installation disk.
    • Parallels may also offer the option to download and install Windows directly, streamlining the process.

    • Follow the prompts to complete the Windows installation. This process may take some time as Windows sets up.

[Screenshot: installation confirmation for Windows on the virtual machine]

If you’ve done everything correctly, you will get to this confirmation screen.

Step 3: Downloading Visual Studio for Windows

Now that you have Windows running on your Mac via Parallels, you can proceed with installing Visual Studio.

  1. Download Visual Studio:
    • Within your Windows VM, open a web browser and visit the Visual Studio download page.
    • Choose the edition of Visual Studio you want to install (Community, Professional, or Enterprise).
    • Click the “Download” button to start downloading the installer.

      Note: This is done in a browser window inside the VM.

       

  2. Install Visual Studio:
    • Once the download is complete, open the installer file.
    • Follow the on-screen instructions to select your workload preferences (e.g., C++ desktop development, .NET desktop development, ASP.NET and web development, game development with Unity).
    • Click “Install” to begin the installation. This process may take some time, depending on the selected workloads and your internet speed.
    • After installation, launch Visual Studio from the Start menu within your Windows VM.

      [Screenshot: Visual Studio running on a Mac via the Windows VM]
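If you would rather script the download and workload selection than click through the installer (handy if you set up VMs repeatedly), both winget and the Visual Studio bootstrapper support this from a command prompt inside the VM. The package ID, bootstrapper file name, and workload ID below are taken from Microsoft’s documentation at the time of writing; treat them as assumptions and verify against the current docs:

    :: Install Visual Studio 2022 Community via winget
    winget install --id Microsoft.VisualStudio.2022.Community

    :: Or run the downloaded bootstrapper with a workload preselected,
    :: e.g. "Desktop development with C++"
    VisualStudioSetup.exe --add Microsoft.VisualStudio.Workload.NativeDesktop --includeRecommended --passive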

Step 4: Setting Up Your First Project

  1. Once Visual Studio is installed, open it and select “New Project.”
  2. Choose the type of project you want to create (e.g., Console App, Web App, Mobile App).
  3. Follow the prompts to configure your project, including setting the project name and location.
  4. Click “Create” to generate your new project.
    [Screenshot: Visual Studio open on a virtual machine]
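To confirm the toolchain inside the VM works end to end, a quick smoke test is to create a C++ Console App (assuming you selected the C++ desktop development workload) and run a trivial program. A minimal sketch:

    // main.cpp — replaces the generated file in a new C++ Console App project
    #include <iostream>

    int main() {
        std::cout << "Hello from Visual Studio inside a VM on a Mac!\n";
        return 0;
    }

Build and run it with Ctrl+F5 (Start Without Debugging); if the message appears in a console window, Windows, Visual Studio, and the compiler are all wired up correctly.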

Using Visual Studio for Windows on Mac: Navigating the Interface and Optimizing the Virtual Machine

When running Visual Studio on a virtual machine (VM) on your Mac, there are some key differences and considerations to keep in mind to ensure a smooth development experience:

Keyboard Shortcuts

Running Visual Studio in a VM can result in some keyboard shortcuts behaving differently than they would on a native Windows PC. This is due to differences in how macOS and Windows handle certain key combinations. Here are a few tips:

  • Cmd vs. Ctrl Mapping: Parallels lets you map macOS shortcuts to their Windows equivalents, enabling the use of familiar macOS commands like Cmd+C for copy and Cmd+V for paste in your Windows VM. This can be configured under Devices & Sound > Keyboard by enabling the “Use macOS shortcuts” option.
  • Function Keys in Parallels: Adjust the behavior of function keys (F1-F12) in Parallels to operate as standard function keys for Visual Studio commands. Access these settings via Parallels Desktop > Preferences > Shortcuts or under Devices & Sound > Keyboard for your VM.
  • Customize Mac System Settings: Alternatively, modify your Mac’s System Preferences > Keyboard by checking “Use F1, F2, etc. keys as standard function keys.” This avoids needing to press the Fn key when using function keys in Visual Studio within a Parallels VM.
  • Customizing Shortcuts in Visual Studio: If certain shortcuts aren’t functioning as expected within the VM, customize your keyboard shortcuts directly in Visual Studio under Tools > Options > Environment > Keyboard.
  • Windows Apps on the Mac Taskbar: Parallels can display Windows apps on the Mac taskbar. If you prefer a cleaner interface, disable this feature in Parallels settings to avoid taskbar clutter.
  • Folder Sharing in Parallels: Parallels shares many folders between your Mac and the VM by default. For increased privacy or security, customize sharing options to limit access to specific folders, such as only sharing your Downloads or a dedicated project folder.
  • System Resource Allocation: Optimize CPU, RAM, and disk space allocation for your VM based on your workload. Proper allocation ensures both your Mac and the VM perform smoothly during demanding tasks like code compilation.

Display and Resolution

When running a VM, Parallels offers different display modes to suit your workflow:

  • Fullscreen Mode: Parallels can run your VM in fullscreen, integrating it seamlessly into your Mac’s desktop environment. You can use macOS Spaces to switch between your VM and other macOS apps effortlessly.
  • Windowed Mode: If you prefer to keep your VM contained, Windowed mode lets you run Windows inside a resizable window on your desktop. This can be useful for quickly accessing other macOS applications without losing sight of your VM.
  • Coherence Mode: This mode allows Windows applications to appear alongside macOS apps on your desktop, blending the two environments. While it looks impressive, it can sometimes cause graphical glitches. In my experience, it’s a neat marketing feature, but not always practical for everyday use. However, some users find it very effective for their needs, so it’s worth experimenting with if you’re curious.

Parallels generally sets up Windows with the correct DPI settings automatically, so display resolution issues are rare. Adjusting these settings usually requires deliberate changes, making it easy to maintain a crisp and consistent interface across your VM and macOS.

Enhancing the Experience with Visual Assist

Developing on a VM can present unique challenges, but with the right setup and a few tweaks, you can create a development environment that’s nearly as effective as working on a native Windows machine. By paying attention to how keyboard shortcuts behave, optimizing performance settings, and ensuring good network connectivity, you can make the most out of Visual Studio in a virtualized environment on your Mac.

Visual Assist, renowned for its powerful productivity features, is now fully supported on ARM devices, including Macs with Apple Silicon (M1, M2, etc). Here’s how to install it:

  1. Initiate the Virtual Machine environment. Launch whatever VM you installed.
  2. Open Visual Studio: Launch the Visual Studio application inside your Windows VM to begin the installation process.
  3. Navigate to Extensions > Manage Extensions: In the top menu, click on “Extensions,” then select “Manage Extensions” from the dropdown. This will open the Extensions Manager window.
  4. Search for “Visual Assist” and click “Install”: In the Extensions Manager, use the search bar to find “Visual Assist.” Once located, click the “Install” button next to the extension. You can also download it straight from the VS marketplace. The installation process will begin automatically.
    [Screenshot: Visual Assist running with ARM support]
  5. Restart Visual Studio to enable the extension: After installation, restart Visual Studio to activate Visual Assist. Once restarted, you will have access to all the powerful features Visual Assist offers.

Benefits of Visual Assist on ARM Devices

With Visual Assist enabled on ARM devices, Mac users can experience a significant boost in productivity and code quality. Here are some of the key benefits:

  • Full ARM support. Visual Assist added ARM support which includes Mac silicon-based devices. For those using VMs, this is one of the best workarounds to getting a better VS experience.
  • Enhanced Code Navigation: Quickly jump to definitions, references, and symbols within your codebase. This feature allows you to navigate complex projects with ease, reducing the time spent searching for specific code elements and improving overall efficiency.
  • Refactoring Tools: Easily refactor code with powerful tools like Rename, Encapsulate Field, and Extract Method. These tools help maintain clean and organized code by automating common refactoring tasks, making it easier to implement changes and ensure code consistency.
  • Code Assistance: Improved IntelliSense with better suggestions and real-time error checking. Visual Assist enhances IntelliSense by providing more accurate and context-aware code completions, helping you write code faster and with fewer errors. Real-time error checking also helps you catch and fix issues as you code, reducing the likelihood of bugs in your final product.
  • Performance Optimization: Visual Assist is optimized for ARM architecture, ensuring smooth and efficient performance on M1 and M2 Macs. This optimization takes full advantage of the advanced capabilities of Apple Silicon, providing a responsive and lag-free development experience even for large and complex projects.
  • Advanced Code Analysis: Visual Assist includes advanced code analysis tools that help you understand and improve your codebase. These tools identify potential issues, suggest improvements, and provide insights into code complexity and maintainability, enabling you to write high-quality code.
  • Customizable Shortcuts and Commands: Tailor your development environment to your workflow by customizing shortcuts and commands. Visual Assist allows you to configure key bindings and commands to suit your preferences, making it easier to access frequently used features and streamline your coding process.
  • Seamless Integration with Visual Studio: Visual Assist integrates seamlessly with Visual Studio running in your VM, providing a cohesive and unified development experience. The extension works alongside other Visual Studio tools and features, enhancing the overall functionality of the IDE without disrupting your workflow.

By leveraging the capabilities of Visual Assist on ARM devices, you can significantly enhance your coding experience on Mac. Whether you’re working on small projects or large-scale applications, Visual Assist provides the tools and features you need to be more productive and write better code.

Conclusion

Congratulations! You have successfully installed Visual Studio on a virtual machine. You should now be able to develop as you would on a native Windows device. Tech is always changing, and with advances in both hardware and software we can expect to squeeze even more performance out of virtualized setups in the future. But for now, enjoy your new virtual machine, complete with a fresh install of Visual Studio! Happy coding.

The post Installing Virtual Machines to use Visual Studio on Mac first appeared on Tomato Soup.

]]>
https://www.wholetomato.com/blog/installing-virtual-machines-to-use-visual-studio-on-mac/feed/ 0 3868