Not always. Dependencies were a huge problem at Google, even in C++ (perhaps especially in C++), because they mean that the linker has to do a lot of extra work. And unlike compiling, linking can't be parallelized, since it has to produce one binary. At one point the webserver for Google Search grew big enough that the linker started running out of RAM on build machines, and then we had a big problem.
There's still no substitute for good code hygiene and knowing exactly what you're using and what additional bloat you're bringing in when you add a library.
That's a pretty significant special case though. I'd be willing to go with the advice "If you get as big as Google's codebase, be sure to trim the dependencies on your statically-bound languages too." But you probably have a ways to go before that's an engineering concern for your project.
(... note that one could make a similar argument for more runtime-dynamic languages. I won't disagree, other than to observe that as a lone engineer, I've managed to code myself into a corner with dependencies in Rails ;) ).
The amount of time I've seen wasted trying to scale like Google is insane. Worry about what Google does when you work for a company worth at least a billion dollars.
For most projects, import as many dependencies as you can - you're getting free labour. Sure, once in a while you'll fuck something up and waste a week or two, but that pales in comparison to the months you didn't spend reinventing the wheel.
No one ever seems to notice that it's all the companies with boatloads of cash that have massive technical debt. Even in the Google example, the first thing I'd try is jamming more memory into those machines, and I'd keep going until the linker needed more than 256 GB.
Fuck, Facebook still uses PHP, the stock market doesn't seem to care.
Was it running out of memory because of templates? For instance, parts of Boost, like Boost.Serialization, generate an obscene number of symbols due to the way they do metaprogramming.
Templates were a problem but not a huge one. They aren't used extensively in the webserver, and in any case they bloat compile time more than link time.
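As a toy illustration of the symbol blow-up (mine, not from the webserver): each distinct instantiation of a function template emits its own weak symbol, so recursive metaprogramming multiplies the symbol count fast.

    // symbols.cc - compile with: g++ -c symbols.cc
    // Instantiating chain<100>() transitively instantiates chain<99>(),
    // chain<98>(), ..., chain<1>(), so this single use emits roughly a
    // hundred weak function symbols for the linker to deduplicate.
    template <int N>
    int chain() { return N + chain<N - 1>(); }

    template <>
    int chain<0>() { return 0; }

    int main() { return chain<100>(); }

Running nm -C symbols.o | grep chain shows the pile-up; multiply that pattern across something like Boost.Serialization and object files get fat quickly.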
The bigger problem was that we'd adopted a dependency strategy of "lots of little libraries" instead of "one big library with lots of source files". This offloads a lot of the work from the compiler to the linker. There are various advantages to this strategy - it speeds up incremental rebuilds, it encourages you to explicitly track all your dependencies, it simplifies IWYU (include-what-you-use), and it's easier to parallelize - but linker RAM usage is not one of them.
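For anyone who hasn't seen the two styles side by side, here's a hypothetical Bazel-flavored sketch (target names invented for the example):

    # "Lots of little libraries": every component is its own target with
    # explicit deps. Great for incremental rebuilds and IWYU, but the
    # linker ends up consuming thousands of small archives.
    cc_library(
        name = "query_parser",
        srcs = ["query_parser.cc"],
        hdrs = ["query_parser.h"],
        deps = [":tokenizer", ":unicode_util"],
    )

    # "One big library": the compiler does more work per invocation, but
    # the linker sees one large archive instead of a deep dependency graph.
    cc_library(
        name = "websearch_core",
        srcs = glob(["*.cc"]),
        hdrs = glob(["*.h"]),
    )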
It was debug builds, which put an additional strain on the linker in that they keep around all the debug symbols. Regular builds were slow but manageable, so it's not like we couldn't release or develop, but it meant that tracking down any sort of crash or serious bug was very difficult until we got the dependencies under control.
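If anyone wants to see the overhead for themselves, it's easy to reproduce on any nontrivial binary (commands are illustrative, not Google's build):

    g++ -O2    server.cc -o server_release
    g++ -O2 -g server.cc -o server_debug
    ls -l server_release server_debug       # debug binary is several times larger
    objdump -h server_debug | grep debug    # the .debug_* sections are the culprit
    strip server_debug                      # dropping them recovers nearly all of it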
I forget the exact compiler settings - wasn't my department - but I think they included link-time optimization, and also FDO (feedback-directed optimization).
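For reference, with GCC those settings look roughly like this (a sketch; the thread doesn't say which toolchain or exact flags were used):

    # Link-time optimization: each object file carries compiler IR, and
    # the heavy optimization work happens in the link step - which costs
    # the linker a lot more RAM.
    g++ -O2 -flto -c server.cc -o server.o
    g++ -O2 -flto server.o -o server

    # FDO: build instrumented, run a representative workload to collect a
    # profile, then rebuild using it.
    g++ -O2 -fprofile-generate server.cc -o server_prof
    ./server_prof                           # writes *.gcda profile data
    g++ -O2 -fprofile-use server.cc -o server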
Ran out of RAM or address space? I've run out of RAM trying to do ridiculous stuff like native builds on tiny embedded systems (due to whack code bases that wouldn't cross compile). Though, in the end I overcame this with even more perverse solutions, like adding swap space via USB1 flash.
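(For the curious, the hack itself is just the standard Linux swap dance pointed at the stick - device name made up here:)

    mkswap /dev/sdb1     # format the flash partition as swap
    swapon /dev/sdb1     # add it to the pool; the build thrashes but survives
    swapoff /dev/sdb1    # detach it once the link finally finishes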
That aside, that's the third time in a couple of months I've heard someone mention running out of RAM without explaining why swap couldn't at least work as a stopgap measure.
Ran out of RAM for cloud builds - Google generally doesn't enable swap on anything in the cloud, because paging to disk leads to unpredictable, massive delays that can cause cascading failures in services.
It was still "possible" to build on your workstation, but the locality patterns in linking, and the subsequent thrashing, made this true only for extremely small values of "possible". I recall that once during this period I kicked off a local debug build on my workstation on Friday afternoon, went home for the weekend, and it was still running when I got into work on Monday morning. By Tuesday, I had given up and killed it.