Dependent types are already very useful. If we're willing to forgo full formal verification, then we don't need a complete correctness proof for our application; in that case, there's no reason to avoid dependent types just because full proofs don't scale. They still allow us to prove correctness for parts of an application (e.g. a tricky, performance-sensitive algorithm), which is better than nothing.
In particular, we can use dependent types to place much tighter constraints on critical pieces of code, but 'cut short' a full correctness proof by simply punting to an error case when it's convenient.
For example, say we need to manipulate the Nth element of a collection. We can express indices with a type 'Fin X', whose values are equivalent to 0, 1, ..., X-1 (one value for each valid index into a collection of length X), e.g.
f1 : (c: List Foo) -> (n: Fin (length c)) -> BAR
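As a rough illustration of how 'Fin' rules out bad indices, here is a Haskell sketch using GADTs; the 'Vec' and 'index' names are my own, not from the text, and a length-indexed vector stands in for the list above:

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- Type-level naturals and a length-indexed vector.
data Nat = Z | S Nat

data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- 'Fin n' has exactly n values, so it can only index a 'Vec n a'.
data Fin (n :: Nat) where
  FZ :: Fin ('S n)
  FS :: Fin n -> Fin ('S n)

-- Total lookup: no bounds check, no Maybe in the result. There is no
-- 'Fin 'Z', so the VNil case is impossible and need not be written.
index :: Vec n a -> Fin n -> a
index (VCons x _)  FZ     = x
index (VCons _ xs) (FS i) = index xs i
```

Because 'Fin 'Z' has no values, the compiler itself knows an empty vector can never be indexed, mirroring the non-emptiness corollary discussed below.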
Within this 'f1' function, we know that the index 'n' will appear in the list 'c' (as a corollary, we also know that 'c' is non-empty, since empty lists have no indices!); this lets us specify the intended behaviour more precisely, lets us satisfy the requirements of other functions we may want to call, lets the compiler spot more problems for us, perhaps avoids the need for certain bounds checks, etc.
Yet this places a burden on anyone who calls this 'f1' function: they need to prove that the index appears within the list (and, in doing so, prove that the list is non-empty). This might be very difficult in the general case, yet a full correctness proof would require it.
However, we don't need a full correctness proof; so we can just check if we're in bounds, and return an error if not:
f2 : List Foo -> Int -> Either String BAR
f2 c i = case mkFin (length c) i of
           Just n  -> Right (f1 c n)
           Nothing -> Left ("Index " ++ show i ++ " not in list of length " ++ show (length c))
  where mkFin : (n: Nat) -> (i: Int) -> Maybe (Fin n)
        mkFin Z _ = Nothing
        mkFin _ i | i < 0 = Nothing
        mkFin (S n) 0 = Just fZ
        mkFin (S n) i = map fS (mkFin n (i - 1))
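For comparison, roughly the same 'mkFin' check can be written in Haskell; lacking full dependent types, it needs a runtime singleton witness 'SNat' for the length ('SNat' and 'finToInt' are my own illustrative names, not a standard library API):

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

data Nat = Z | S Nat

-- 'Fin n' has one constructor path per valid index 0..n-1.
data Fin (n :: Nat) where
  FZ :: Fin ('S n)
  FS :: Fin n -> Fin ('S n)

-- Runtime witness for a type-level length.
data SNat (n :: Nat) where
  SZ :: SNat 'Z
  SS :: SNat n -> SNat ('S n)

-- Check a runtime Int against the length; succeed only when in bounds.
mkFin :: SNat n -> Int -> Maybe (Fin n)
mkFin SZ _         = Nothing
mkFin _  i | i < 0 = Nothing
mkFin (SS _) 0     = Just FZ
mkFin (SS n) i     = FS <$> mkFin n (i - 1)

-- Convert back to a plain Int, e.g. for inspecting results.
finToInt :: Fin n -> Int
finToInt FZ     = 0
finToInt (FS i) = 1 + finToInt i
```

The key property is the same as in 'f2': the only way to obtain a 'Fin n' is to pass the check, so any code receiving one can rely on it being in bounds.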
Calling a function like 'f2' may end up hitting the error case, so this doesn't prove our entire application is correct; but we can still prove all sorts of things about 'f1', and hence have a lot of confidence in what happens whenever this initial error branch is avoided. Also, anticipating an error up-front is usually preferable to hitting one half-way through processing.
Dependent types are really useful for propagating error conditions 'backwards' through the code like this. When practical, I much prefer this to propagating error conditions 'forwards' by returning Maybe, etc. Most applications start with some sort of parsing step, which is a great place to perform such checks, if we can manage to propagate them back that far. Otherwise, we handle these error cases just like any others (dependent types or not).
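The idea of pushing the check back to the parsing step can be sketched even without dependent types; 'parseIndex' below is a hypothetical name for a boundary that validates raw input once, so downstream code never sees an unchecked index:

```haskell
import Text.Read (readMaybe)

-- Validate a raw index string, at parse time, against the collection we
-- already have. Everything after this point can trust the index.
parseIndex :: [a] -> String -> Either String Int
parseIndex c s = case readMaybe s of
  Nothing -> Left ("Not a number: " ++ s)
  Just i
    | i < 0 || i >= length c ->
        Left ("Index " ++ show i ++ " not in list of length " ++ show (length c))
    | otherwise -> Right i
```

In a dependently typed language, the success branch could return a 'Fin (length c)' instead of a bare Int, so the fact that the check happened is visible in the type and can't be forgotten downstream.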