100G is not quite state-of-the-art anymore, but it is what is being deployed. This chip supports 10G ports and can push 400Gbps of aggregate throughput, but from what I can see it does not support 100Gbps front-panel ports.
There are field trials and pre-production switch/routing chips and optical/DWDM gear that support 400G Ethernet using 56Gbps SerDes. This chip uses older 12.5Gbps SerDes; current-gen 100Gbps gear uses 28Gbps SerDes.
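As a rough sketch of why the SerDes generation matters (the per-lane payload rates below are my approximations, with encoding overhead folded in): front-panel speeds are reached by bonding lanes, so a 12.5Gbps-SerDes chip would need ten lanes per 100G port, the clunky first-gen approach (e.g. 100GBASE-SR10), whereas 28G and 56G SerDes get there in 4 and 8 lanes respectively.

    # Back-of-the-envelope lane math; lane payload rates are approximations.
    serdes_generations = {
        "12.5G NRZ (this chip)": 10,   # ~10G payload per lane (10GBASE-R style)
        "28G NRZ (current 100G)": 25,  # 4 lanes per 100G port (CAUI-4)
        "56G PAM4 (400G trials)": 50,  # 8 lanes per 400G port
    }

    def lanes_needed(port_speed_gbps, lane_payload_gbps):
        """Lanes that must be bonded to reach a given front-panel speed."""
        return -(-port_speed_gbps // lane_payload_gbps)  # ceiling division

    for name, lane_rate in serdes_generations.items():
        print(f"{name}: 100G needs {lanes_needed(100, lane_rate)} lanes, "
              f"400G needs {lanes_needed(400, lane_rate)} lanes")

Ten lanes per port burns a lot of chip edge and optics, which is presumably why a 12.5G-SerDes design like this tops out at 10G front-panel ports.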
100G has been around for a while, but it is not that common in practice outside of extreme cases. 10G and 40G are still the most widely used ports. Considering that most switches and routers have at least 10-20 ports (10G-40G), that's already a stupendous amount of bandwidth - 20 ports of 40G is 800Gbps of aggregate capacity - and on routers you can always add more line cards. 100G routers/switches are very expensive.
Unless you are pushing Google/Facebook/Comcast levels of traffic, there are very few use cases. Apparently, Google and Facebook use their own network hardware anyway.
This isn't really true in the market today. 100G switches have emerged at roughly the same cost per port as 40G from a couple of years ago and, indeed, can often accept both 40 and 100 gig optics. Even at list price the cost per port for 100G switching has been under $1K for more than a year.
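For concreteness, the sub-$1K figure is just list price divided by port count; the numbers below are purely illustrative assumptions on my part, not quotes for any particular box.

    # Illustrative only: hypothetical list price and a common fixed-form port count.
    list_price_usd = 30000   # assumed list price for a 1U fixed 100G switch
    ports = 32               # 32x100G is a common fixed-form factor
    print(f"${list_price_usd / ports:,.0f} per 100G port")  # -> $938 per 100G port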
As a result it's actually fairly common to find new DC fabrics (read: inter-switch connections, not end hosts) being built with 100G because there's no significant economic disadvantage to doing so. That said, the pricing for inter-site 100G is still high enough that it hasn't commonly made its way to smaller organizations.