Julia resembles modern Fortran in some ways (no semicolons or curly braces, array indices start at 1, built-in array operations), and several of the tips in the article amount to writing Julia like Fortran. So maybe one should write a computational kernel in modern Fortran.
"the type of the returned value of a function must depend only on the type of the input of the function and not on the peculiar value it is given." Fortran mandates this.
"Another common error is changing the type of a variable inside a function." Static typing, as in Fortran, disallows this.
"When a function is in a critical inner loop, avoid keyword arguments and use only positional arguments, this will lead to better optimisation and faster execution." I doubt this will matter for a compiled language.
"Avoid global scope variables." For a compiled language, it is unlikely to matter.
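The first two quoted tips are both about type stability. A minimal Julia sketch of the difference (function names are illustrative, not from the article):

```julia
# Type-unstable: the return type depends on the *value* of x, not just
# its type. For a Float64 input the result can be Float64 or Int, so the
# compiler has to carry a Union and box the result.
unstable(x) = x > 0 ? x : 0

# Type-stable: the return type is determined by the input type alone.
# zero(x) returns the zero of typeof(x), so Float64 in, Float64 out.
stable(x) = x > 0 ? x : zero(x)

@assert unstable(-1.5) === 0     # Int, even though the input was Float64
@assert stable(-1.5)   === 0.0   # Float64, matching the input type
@assert stable(2.0)    === 2.0
```

In Fortran the declared return type makes `unstable` impossible to write; in Julia the compiler infers it, and `@code_warntype` will flag the Union.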
It's not an accident really: dynamic dispatch is slow but expressive, static dispatch is fast but restrictive. Julia (as a language trying to solve the two-language problem) is designed to be approachable both to Fortran programmers, for whom writing it like Fortran will work (and will be just as fast, because static dispatch is fast), and to Python programmers (which will lead to slower but highly dynamic code). Idiomatic Julia is actually a middle ground: you always program at the highest level possible, assuming behaviors rather than types, because the language is fundamentally duck-typed like Python. The difference from CPython, though, is that the Julia compiler generates one optimized static implementation for each combination of argument types actually used, instead of one dynamic implementation (as long as each implementation is not so dynamic that it can't be expressed as static code, which usually means it must not depend on runtime information).
On the third point (keyword arguments): it really won't matter in most other compiled languages, but here the Julia compiler actually optimizes beyond them thanks to the multiple dispatch paradigm (in some cases it will even replace the function call directly with its result, if it can solve it entirely from compile-time information). Global variables, on the other hand, generally can't be optimized by the compiler, since they can be modified at any point, so it can't make any assumptions about them.
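A small sketch of both points (function and variable names are illustrative):

```julia
# Duck-typed generic function: no type annotations at all.
double(x) = x + x

# The compiler emits one specialized native implementation per argument
# type actually used: an integer add for double(::Int), a float add for
# double(::Float64), with no runtime dispatch inside either.
@assert double(3)   === 6
@assert double(1.5) === 3.0

# An untyped global blocks this: the compiler can't assume `gl` keeps
# its type, so code reading it falls back to dynamic dispatch.
gl = 10
slow() = gl + 1

# Marking it `const` (or passing it as an argument) restores the type
# information, and the static fast path with it.
const cg = 10
fast() = cg + 1

@assert slow() == fast() == 11
```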
The nice thing is that, as a julia programmer, you get to choose how 'fortran-like' things get. You often start off writing naïve dynamic code that is more like Python or Lisp, and then identify performance bottlenecks and optimize those. This prescription is familiar to Python programmers, except that there the 'optimize this bottleneck' step involves leaving CPython and using Cython or Numba or whatever.
In julia, you optimize your code by just writing slightly uglier julia code, not by leaving the language. This has huge implications for
1) People 'looking under the hood'. Instead of having to read C as well as Julia, they only need to read Julia (albeit more advanced, ugly, technical julia code). This tends to greatly lower the bar for a julia user to become a julia developer, and actively encourages learning more about the language.
2) Programs 'looking under the hood'. Code modification, injection and re-use is really common in julia and is part of what makes writing it so magical. Part of that is multiple dispatch code: https://www.youtube.com/watch?v=kc9HwsxE1OY but another major thing this enables is prevalent, language-wide automatic differentiation tools. Julia users tend to take it for granted at this point that they can take just about any function which takes in a number and spits out a number and get a derivative of that code at compile time, no matter whether that function was ever written with auto-diff in mind or ever knew about tools like Zygote or ForwardDiff. This sort of stuff would not be nearly as powerful in a language which did a lot of calling out to other languages to achieve performance.
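A sketch of that autodiff workflow using ForwardDiff.jl (assuming the package is installed; the function `f` is just an example):

```julia
using ForwardDiff  # assumes the ForwardDiff.jl package is installed

# An ordinary function, written with no knowledge of autodiff:
f(x) = x^3 + 2x

# ForwardDiff calls f with dual numbers; the compiler specializes f for
# the Dual type exactly as it would for Float64, so the derivative code
# is compiled, not interpreted.
df = ForwardDiff.derivative(f, 2.0)  # f'(x) = 3x^2 + 2, so df == 14.0
```

Had `f` been a call into a C or Fortran extension, the dual numbers could not flow through it; this only works because the whole stack is julia.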
The entire idea of julia is trying to get as much expressiveness and dynamism as possible while still allowing for C/Fortran levels of speed.
For me the main performance problem with Julia is how long compilation takes. Yes, using it only in conjunction with a Jupyter notebook avoids the issue for the most part, but writing large programs across multiple files that way is so cumbersome (if even possible) that I won't even try.
A program that otherwise takes 4 seconds to compile and run takes 30 seconds when I also let it export a diagram, because each time I run "julia file.jl" it recompiles the plotting library.
If there is a way to have a Julia runtime in the background and letting it execute iterations of the program I haven't found it yet.
The easiest solution for keeping Julia running in the background is Revise.jl. Basically, you import your program in the REPL and it will automatically track any change to the source code, recompiling and updating the environment as soon as you save (and any function you run in the REPL, including slow-to-compile ones such as plotting, will be faster from the second run onward). You can combine it with other useful REPL modules such as Rebugger.jl, OhMyREPL.jl and Infiltrator.jl, which you can import automatically in the startup.jl file.
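A minimal version of that workflow (the file and function names are hypothetical):

```julia
# At the REPL, or in ~/.julia/config/startup.jl:
using Revise            # assumes Revise.jl is installed

# includet = "include and track": Revise watches the file and recompiles
# only the methods you actually change, keeping the session warm.
includet("myscript.jl")  # hypothetical file name

main()                   # assuming myscript.jl defines main(); after
                         # editing the file, just call main() again
                         # without restarting Julia
```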
The compiler team is prioritizing compile-time latency and error messages for the 1.4 release, which, if they succeed, will make the language more approachable for people who use the same workflow as with other dynamic languages (running the code from the shell).
Another user suggested Revise.jl, but another option is to just reduce the amount of compilation work you have Julia doing, if your script isn't crazy performance sensitive. You can play around with command line arguments to julia like `--compile=min`. Here's an example:
~ time julia -e "using Plots; x = 0:0.01:10; plot(x, sin.(x))"
real 0m18.050s
user 0m17.466s
sys 0m0.321s
~ time julia --compile=min -e "using Plots; x = 0:0.01:10; plot(x, sin.(x))"
real 0m4.262s
user 0m3.706s
sys 0m0.263s