Using an algorithm discovered in 2019 (cf. Harvey and van der Hoeven), you can multiply two b-bit integers in O(b log b) bit operations, so multiplying numbers of magnitude n (which have about log n bits) costs O(log n * log log n), if you have a lot of time: the algorithm only pays off for astronomically large inputs. That bound is conjectured, though not proven, to be optimal.
Sure, or you just define the input size (the n) to be the number of digits in, i.e. roughly the logarithm of, the number you're summing up to. In practice, though, people usually treat basic arithmetic as constant-time in big O discussions, because we're typically assuming a machine with constant-time instructions for those operations (granted, that only holds up to the largest integer or float the platform supports natively).
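To make that word-RAM caveat concrete, here's a minimal Python sketch (assuming the thread is about summing 1..n with the closed-form formula; the function names are mine). Python's arbitrary-precision ints mean the formula does a fixed number of *operations*, but the multiplication itself gets slower as n gains digits, which is exactly where the bit-complexity discussion above kicks in.

```python
def sum_loop(n: int) -> int:
    # O(n) additions; each addition touches O(log n) bits.
    total = 0
    for i in range(1, n + 1):
        total += i
    return total


def sum_formula(n: int) -> int:
    # One multiplication and one division: "constant time" only in the
    # word-RAM model, where n fits in a machine word. With big ints the
    # multiplication cost grows with the number of digits of n.
    return n * (n + 1) // 2


if __name__ == "__main__":
    n = 10**6
    assert sum_loop(n) == sum_formula(n)
    print(sum_formula(n))
```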