One of the big reasons I quit R 10 years ago and never looked back: Python wasn't secretly converting anything, nor failing silently when a value isn't the expected type.
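A minimal Python sketch of the distinction being drawn here: mixing types is a loud error in Python rather than a silent coercion (the R comparison in the comment is paraphrased):

```python
# Mixing types is a loud error in Python rather than a silent coercion:
try:
    total = "2" + 2  # no implicit string/number conversion
except TypeError as err:
    print("TypeError:", err)

# Base R, by contrast, quietly coerces in many contexts:
# e.g. c(1, "2") silently becomes the character vector c("1", "2").
```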
R's come a long way in the last decade. The tibble and data.table packages both address this issue. data.table (https://github.com/Rdatatable/data.table) is the more strongly typed of the two: by default it fails loudly when it encounters data that doesn't conform to the column type. It's also quite fast, since it binds to C code that parallelizes with OpenMP. And its syntax is terse and expressive; I find it much more intuitive and easier to work with than pandas.
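For contrast, here's what the pandas side of this looks like. With default settings pandas reads a column containing a stray non-numeric value without complaint, silently falling back to dtype `object`; declaring the expected type up front makes it fail loudly instead, which is roughly the behavior the comment attributes to data.table's defaults. The CSV content is invented for illustration:

```python
import io
import pandas as pd

# A CSV whose "value" column contains a stray non-numeric entry.
csv_text = "id,value\n1,10\n2,oops\n3,30\n"

# By default pandas reads it without complaint, silently falling back
# to dtype 'object'; the problem only surfaces later, at analysis time.
df = pd.read_csv(io.StringIO(csv_text))
print(df["value"].dtype)  # object

# Declaring the expected type up front makes pandas fail loudly instead.
try:
    pd.read_csv(io.StringIO(csv_text), dtype={"value": "int64"})
except ValueError as err:
    print("refused:", err)
```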
If you're happy with Python, by all means keep using it. I use both languages. Just suggesting that if you gave up on R that long ago, you might be pleasantly surprised by how much better it's gotten since then.
vctrs [0] is the latest effort by the RStudio developers to help people write type-stable code. The R standard library has a lot of issues with silently casting types, but the wonderful thing about R being so Scheme-like is that many of these behaviors can be evolved through libraries.
Exactly! TensorFlow is the only Python data analysis package I've used that doesn't automatically convert things in the background. I was helping a friend with Stata the other day, which also doesn't automatically convert, and I realized how used I've gotten to that behavior in R and Python.
I stopped using R a number of years ago because it is not useful for very large datasets (in the tens to hundreds of millions of rows) and now use kdb almost exclusively.
R has quite a few specialized libraries for dealing with large (out-of-memory) datasets. And nothing keeps you from hosting the data in a DBMS and using SQL (or dplyr) to pull the data in an appropriate format.
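The pattern described here can be sketched with Python's stdlib sqlite3 standing in for the DBMS (table and column names are invented for illustration): aggregate in SQL, and pull only the small summary into the analysis environment.

```python
import sqlite3

# Stand-in for a production DBMS; table/column names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, amount REAL)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [(1, 10.0), (1, 2.5), (2, 7.0), (2, 3.0), (2, 1.0)],
)

# Push the heavy lifting to the database: aggregate there, and bring
# back only the per-user summary for modeling.
rows = conn.execute(
    "SELECT user_id, COUNT(*) AS n, SUM(amount) AS total "
    "FROM events GROUP BY user_id ORDER BY user_id"
).fetchall()
print(rows)  # [(1, 2, 12.5), (2, 3, 11.0)]
```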
I run a data science department at a corporation and this is exactly how we handle our massive amounts of data. It's rare that we're using a billion+ data points in one model so we use SQL to get the data we need in the format we need and move forward from there in R.