Can you elaborate on what you worked on? Was it part of academia or industry? I'd guess it was embedded systems, as that domain seems pretty open to formalization: it's fairly low-level and often demands high reliability.
Most of my formal work was for distributed systems. Some of it was for video transcode and packaging work. All of it was for industry.
For example, I implemented a distributed adaptive bitrate video packager whose core synchronization algorithm I needed to both develop and, for my own comfort, formally prove. I did a proof of correctness via exhaustion, and a proof by induction that it achieved consensus in a minimal number of steps.
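To give a flavor of what "via exhaustion" means: you model each step of the algorithm as atomic, enumerate every reachable state, and check your invariant in every one of them. Here's a toy Python sketch of that style applied to Peterson's lock (not the packager algorithm, just an illustration):

    # A thread's program counter: 0 = set flag, 1 = set turn,
    # 2 = wait, 3 = in critical section, 4 = done.

    def step(state, tid):
        # Advance thread tid by one atomic step; None if blocked or done.
        pc0, pc1, f0, f1, turn = state
        pc = (pc0, pc1)[tid]
        me, other = (f0, f1) if tid == 0 else (f1, f0)
        if pc == 0:            # flag[me] = True
            me = True
        elif pc == 1:          # turn = other
            turn = 1 - tid
        elif pc == 2:          # spin while flag[other] and turn != me
            if other and turn != tid:
                return None    # still blocked in the wait loop
        elif pc == 3:          # leave critical section: flag[me] = False
            me = False
        else:
            return None        # thread finished
        f0, f1 = (me, other) if tid == 0 else (other, me)
        pcs = [pc0, pc1]
        pcs[tid] = pc + 1
        return (pcs[0], pcs[1], f0, f1, turn)

    def reachable():
        # Exhaustively enumerate every reachable state of the model.
        start = (0, 0, False, False, 0)
        seen, stack = {start}, [start]
        while stack:
            state = stack.pop()
            yield state
            for tid in (0, 1):
                nxt = step(state, tid)
                if nxt is not None and nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)

    # Mutual exclusion: no reachable state has both threads at pc == 3.
    assert all(not (s[0] == 3 and s[1] == 3) for s in reachable())

For anything bigger than a toy you'd reach for TLA+ or SPIN rather than hand-rolling the state walk, but the principle is exactly this.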
This is pretty typical for any "distributed" algorithm because intuiting correctness for concurrent and distributed stuff is ... really hard. That's why people formally prove garbage collection algorithms correct.
When designing large distributed systems, a thorough understanding of statistics is required to understand workloads. How many transactions per second should this microservice support? Better pull out that Poisson distribution. What cache size do I need? Better grab a Zipf distribution, some sample data, and R. Want to understand the interplay of several factors on workload? I hope you're comfortable with multivariable calculus.
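To make that concrete, a quick back-of-the-envelope in Python (the numbers are invented, purely for illustration): size a service against the Poisson tail rather than the mean, and estimate cache hit rate from a Zipf popularity curve.

    import math

    def poisson_tail(lam, c):
        # P(X > c) for X ~ Poisson(lam), using the term recurrence
        # p_k = p_{k-1} * lam / k so we never touch huge factorials.
        term = math.exp(-lam)
        cdf = term
        for k in range(1, c + 1):
            term *= lam / k
            cdf += term
        return 1.0 - cdf

    def capacity_for(lam, epsilon=1e-4):
        # Smallest per-second capacity c with P(arrivals > c) < epsilon.
        c = int(lam)
        while poisson_tail(lam, c) >= epsilon:
            c += 1
        return c

    def zipf_hit_rate(n_items, cache_size, s=1.0):
        # Expected hit rate from caching the cache_size most popular of
        # n_items whose request popularity is Zipf with exponent s.
        weights = [1.0 / k ** s for k in range(1, n_items + 1)]
        return sum(weights[:cache_size]) / sum(weights)

    # A service averaging 500 req/s needs headroom noticeably above 500:
    print(capacity_for(500))
    # Caching just the top 1% of a 100k-item Zipf(1.0) catalog:
    print(zipf_hit_rate(100_000, 1_000))   # ~0.62 hit rate

The punchline of the Zipf math is why caching works at all: a cache holding 1% of the catalog can serve the majority of requests, which no uniform-popularity intuition would predict.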
I don't face problems like these daily, but when I do I'm fucking glad for every inch of math I know. Which to be honest is still probably not enough.