So does my ant colony analogy apply here? When do we get to move into our hexagonal cell I mean house? You know, bees use hexagons because they are the most efficient use of space.
But please do keep using the "redneck" trope. We're all uneducated hillbillies that obviously aren't as smart and sophisticated as everyone else.
I like both JIRA and Confluence. I think many bad feelings towards JIRA are actually related to bad configuration. Like customizing the issues to add too many mandatory fields or creating a process that does not really match how people are working in practice.
JIRA configuration is a pain. Unless you are large enough to afford a dedicated JIRA admin, it likely makes sense to just pay somebody once in a while to implement the stuff you want.
All my bad feelings are almost entirely related to the horrible search, terrible UI and the hilariously bad performance of the app. The web app has two completely different markdown-adjacent formats that are not compatible with each other, so text renders one way when you create a ticket and another when you edit it later. Search is almost never helpful beyond finding recent tickets. Many pages take ages to load, and the new UI is laggy and busted:
- default text size is too small, and text block widths are often not constrained enough, giving the "slashdot unreadability" effect
- 0.5s+ lag when clicking any textbox
- 1s+ lag when clicking any dropdown
- 1s+ lag delay when grabbing and trying to drag-and-drop to reorder items in the backlog
- 2s delay after clicking an issue in the backlog before the loading even starts (add about 2s more for the loading itself)
- "Link issue" using a hyperlink icon. The whole idea of using the word "link" to talk about related issues without spending some time to think whether that's really confusing with hyperlinking
- Error messages about what "might" be wrong with things rather than what is actually wrong: "We couldn't save your comment, it might be empty or have invalid formatting". Well which one is it? Did you really have trouble deciding whether my comment is empty or not? Be specific and tell me what exactly is wrong, or do you want me to binary-search / bisect myself?
I think most of the issues can be fixed by completely rewriting the frontend.
Jira? Yes! Properly self hosted with skilled SysAdmins who don't allow every employee to install their favorite plugin. Then it runs fast and reliably.
The UI is very configurable and has many power user tools; their search, for instance, is very good and has a lot of features.
The markup language for writing text is bad, and sadly it isn't even the same across other Atlassian products. Please just let me write markdown.
Confluence is another story. It is one of the better tools for documentation, but it's generally slow no matter who runs it. It has very quirky drawbacks like a page needing to have a unique name in a space even if it is in a completely different tree structure.
The Confluence editor itself hangs constantly and has destroyed pages for me multiple times; thankfully the history is decent, so you can usually recover.
I don't know anybody that likes using JIRA, but to be fair I don't know anybody that likes to use Asana either.
I might be wrong, but I think those tools are dedicated to something people don't like to do by default (planning, going through bureaucratic processes), so they don't like the tool either.
But for some reason, they convince themselves that they don't like going through Bureaucratic Processes because of JIRA/Asana/etc... and that if only the tool were better it would be a breeze. In my experience the process is the real issue, rarely the tooling (even though the tooling surely can be improved).
Asana shows a flying unicorn when you complete a task. Makes my day every time.
A bit more seriously - it's not only about Bureaucratic Processes. UX matters. If you need to deal with "Bureaucratic Processes" using a terrible UI, it makes things worse.
Jira's UI is notoriously bad. Very slow. And the worst of all are the UX inconsistencies in Atlassian's products. Things behaving differently in Jira vs Confluence or even within Jira itself. For example, being able to use markdown when creating an issue but not when editing it (or vice versa, I don't remember).
I hate it. Being forced to document crap in a pile of tens of thousands of documented things is insanity. Nobody reads that stuff, but because it's documented, you can always say: "Wait, did you not read the documentation?"
Work goes faster if you keep it in small teams and let them self-organise.
I love this approach: the team can pick whichever tooling they want to use (github wiki, readthedocs, docx on FTP, whatever works best for them). Each team has its own requirements. I don't see PMs using git/markdown to write their documentation, so in the same way, don't make me write API documentation in Confluence.
I liked it when I first switched to it from whatever I was using around 2010 (Intervals, I think…). It still is somewhat better than a lot of the task management/workflow software that you commonly find in enterprise. But it's kinda just become another version of the thing it was supposed to destroy: the huge, complicated bureaucracy management system.
I use Jira at work with a workflow that is highly customized to our development workflow, and I like using it.
There are some annoying aspects, but it's way better for our workflow than anything else I've tried (which includes github issues, gitlab issues, OTRS, RT, trac and a few others I've forgotten by now).
Over the past few years I've gone from despising Confluence to really, genuinely enjoying authoring in it. I find the syntax and shortcuts easy to remember and the UI pleasant. Hell, I even like the iOS app.
Jira and Bitbucket, OTOH, the less said about them the better.
Confluence has always confused me. Every aspect of the product sucked when I used it several years ago.
There was little to no discoverability, it was slow as shit and the editing tools weren't that good. The integrations with their other products were at the time pretty much non-existent as well. It always felt really basic, which made me wonder how it could be so slow.
Just about any other wiki software was probably better. Mediawiki is free, but people still paid for Confluence.
I don't get why people used it and still use it to this day. I dislike pretty much every one of their products though; they all suck in my opinion, and I have written several JIRA plugins, so I have extensive experience with it.
I like Confluence. It does the job and is simple to use. I use it at home for general household stuff and projects.
At work we are using a combination of notion and gitlab wiki and are probably going to move to confluence. Gitlab wiki is especially hard to use, and thus stuff is under-documented. E.g. I'm on a little project right now where we could use about 20 pages to document bits of it, and that is such a pain in gitlab wiki.
I use Jira, it is okay. I think a lot of people here hate it because they associate it with Enterprise paperwork and rules.
I tolerate Jira (even though it is too slow and clunky to me). But I despise everything else from Atlassian. I don't understand how people can stand using Confluence or Bitbucket.
Anyone using these products has no choice, so, I would say that these products having any API at all that can cobble together automation is a huge blessing.
We have wrapper functions that allow us to automate all the painful interactions with atlassian software. None of them are clever in any way; they're rote. But without them, we weren't using the services fully, which is sophomoric since we don't have freedom to use other services.
I really liked, when I used them, how well integrated all the tools were: BitBucket, JIRA, Confluence...
You could see PRs related to a given ticket, see tickets in wiki, I'm sure it integrates well with CI (Bamboo/Pipelines?), etc. It might seem small, but such integration makes work more pleasant and comfortable.
Like many, I would rejoice to see Jira die in a fire.
But the fact is that if that happened, another tool would be used instead, and the same dysfunctional management processes would just re-emerge. Jira makes micromanagy bureaucratic process easy, but does not in itself cause it.
I love JIRA. It's versatile, as simple or complex as we need, integrates with many things and drives processes in the org.
People have many objections to using JIRA but often these are not about the tool but about the management procedures the tool serves.
I don't mind Jira. We're using Github Projects (because for OSS project where people report issues to the repo) and while new Projects has gotten significantly better, I do still miss some of the planning views that Jira gives you.
I have learned that a lot of the hate towards Jira, including my own, comes down to the difference between a well-set-up config and a bad one, or, as is the case for many people, using it out of the box.
I work with a bunch of 'data scientists' / 'strategists' and the like who love their notebooks but it's a pain to convert their code into an application!
In particular:
* Notebooks store code and data together, which is very messy if you want to look at [only] code history in git.
* It's hard to turn a notebook into an assertive test.
* Converting a notebook function into a python module basically involves cutting and pasting from the notebook into a .py file.
These must be common issues for anyone working in this area. Are there any guides on best practices for bridging from notebooks to applications?
Ideally I'd want to build a python application that's managed via git, but some modules / functions are lifted exactly from notebooks.
> Are there any guides on best practices for bridging from notebooks to applications?
The main point of friction is that the "default" format for storing notebooks is not valid, human-readable python code, but an unreadable json mess. The situation would be much better if a notebook were stored as a python file, with code cells verbatim, and markdown cells inside python comments with appropriate line breaking. That way, you could run and edit notebooks from outside the browser, and let git track them easily. Ah, what a nice world that would be.
But this is exactly the world we already live in, thanks to jupytext!
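For anyone who hasn't seen it: jupytext's "percent" representation is just a plain .py file, with markdown cells living in comments. A made-up example (file and cell contents are invented):

    # %% [markdown]
    # # Sales exploration
    # Notes written here show up as a markdown cell in the notebook UI.

    # %%
    import pandas as pd

    df = pd.read_csv("sales.csv")  # hypothetical data file
    df.describe()

    # %%
    df.groupby("region")["revenue"].sum()

Pairing the .py file with an .ipynb, so the browser UI and git both stay happy, is (as I understand it) something like `jupytext --set-formats ipynb,py:percent notebook.ipynb`.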
Or you could do what I do, and write the report as specially marked comments in the actual code, which can be grepped out later to create a valid markdown document.
Pipe into pandoc, prepend some css, optionally a mathjax header, done. Beautiful reports.
Honestly I've yet to be convinced there's good reason for anything more than this.
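A minimal sketch of that comment-grepping approach, in case it isn't obvious what it looks like. Everything here is made up: the `#: ` marker, the file names, and the pandoc flags are just one way to wire it together.

    # report.py: extract specially marked comments from a script as markdown.
    import sys

    MARKER = "#: "  # hypothetical marker meaning "this comment is report prose"

    def extract(path):
        out = []
        with open(path) as f:
            for line in f:
                stripped = line.lstrip()
                if stripped.startswith(MARKER):
                    out.append(stripped[len(MARKER):].rstrip())
        return "\n".join(out)

    if __name__ == "__main__":
        print(extract(sys.argv[1]))

Then something like `python report.py analysis.py | pandoc -s --css style.css --mathjax -o report.html` gives the kind of standalone report described above.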
Yes, I use a very similar setup with a three-line makefile to test and build. But the OP wanted to use the in-browser notebook interface, and this is still possible via jupytext (while allowing collaboration with out-of-browser users).
1) This is painful. There are tools to help, but the most effective measure I've found is a policy of only committing notebooks in a reset, clean state (enforced with a git hook).
2) I don't understand. I've written full testing frameworks for applications as notebooks, as a means of having code documentation that enforced/tested the non-programmatic statements in the document. Using tools like papermill (https://papermill.readthedocs.io/en/latest/), you can easily write a unit test as a notebook with a whole host of documentation around what it's doing, execute it, and inspect the result (failed execution vs. final state of the notebook vs. whatever you want); see the sketch after this list.
3) Projects like ipynb (https://ipynb.readthedocs.io/en/stable/) allow you to import notebooks as if they were python modules. Different projects have different opinions of what that means, to match different use cases. Papermill gives you an interface to a notebook that is more like a system call than importing a module. I've personally used papermill and ipynb and found both enjoyable for different flavors of blending applications and notebooks.
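To make (2) concrete, here is a minimal sketch of a papermill-driven notebook test. The notebook path and parameters are invented; the relevant behaviour is that papermill raises if any cell errors, which is enough to fail the test, and the executed copy is left on disk for inspection.

    # test_notebooks.py: run a notebook end-to-end as a pytest test.
    import papermill as pm

    def test_model_notebook(tmp_path):
        # Any failing cell raises an exception and fails this test;
        # the executed notebook is written to tmp_path for later inspection.
        pm.execute_notebook(
            "tests/test_model.ipynb",          # hypothetical notebook
            str(tmp_path / "executed.ipynb"),
            parameters={"n_samples": 100},     # hypothetical injected parameters
        )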
This problem is one reason why I'm a little mystified by Jupyter's widespread adoption. It's got a lot of neat features, but the Rstudio/Rmarkdown combo solves the above problem, and for me at least, that's decisive. As a tradeoff, you deal with an IDE that, in a bunch of ways, adds friction to writing Python code; but I gather that the Rstudio team is working on that (https://www.rstudio.com/solutions/r-and-python/). Not trying to start a flamewar here, I actually just don't get why Jupyter has become the default.
(Caveat that Jupyter is way better with e.g. Julia, in my (limited) experience)
For R&D the feedback loops are much tighter when sketching an algorithm line by line in Jupyter vs a Python file. Error in the 20th function? Ok fine, then I'll just change the cell it's defined in and continue from the state after the 19th. If I forget the layout or type of an object, I just inspect it right there in a new cell.
Especially if it deals with multimedia, can just blit images or audio or HTML applications inline.
And it’s fairly trivial to go from Jupyter Notebook -> Python file once you’re done.
Specifically I think they were comparing rmarkdown vs jupyter. And it's really no contest, all the things people hate about jupyter are solved by rmarkdown (and org mode, but that's a harder sell)
The problem with RStudio is that it uses R, which while excellent at numerical calculations, is terrible at everything else - data parsing, string munging, file processing, ...
As the joke goes: The best thing about R is that it's designed by statisticians. The worst thing about R is that it's designed by statisticians.
specifically "data parsing", "string munging", and "file processing"?
I've used R extensively for all of these, and having recently re-visited the python world don't see any advantage that Python has over R for any of these tasks.
My wife has been learning Python (she's not a programmer) and is now looking at R. I thought she was going to like it, as I personally think RStudio is nice. I was surprised she didn't like Rmarkdown after being exposed to Python notebooks; in particular she loved vscode + notebooks and the immediate feedback, and didn't like at all that RStudio doesn't render the markdown interactively, or the R REPL. I have used very little R and I'm a heavy Python user, so maybe I didn't know how to help her more effectively. I think I helped solve the main Python pain points: installing anaconda, vscode, the python extension and some additional auto completion. I don't use vscode (I use Emacs) but it's great that it's available for newbie users :p. Also, having Colab was nice for simple things.
To summarize: I think notebooks are great for newcomers. It requires more maturity to appreciate more principled programming.
Avoid if possible, is the easiest answer. Encourage your colleagues to move their code into proper packages when they're happy with it, and restrict notebooks to _use_ of their code.
Failing that, I think fast.ai's nbdev[0] is probably the most persuasive attempt at making notebooks a useable platform for library/application development. Netflix also has reported[1] substantial investment in notebooks as a development platform, and open-sourced many/most of their tools.
I've worked as a data scientist for quite a while now in IC, lead and manager roles, and the biggest thing I've found is that data scientists cannot be allowed to live exclusively in notebooks.
Notebooks are essential for the EDA and early prototyping stages but all data scientists should be enough "software engineer" to get their code out of their notebook and into a reusable library/package of tools shared with engineering.
On the best teams I've worked on, the hand-off between DS and engineering is not a notebook, it's a pull request, with code review from engineers. Data scientists must put their models in a standard format in a library used by engineering, they must write their own unit tests, and be subject to the same code review that an engineer would. This last step is important: my experience is that many data scientists, especially those coming from academic research, are scared of writing real code. However, after a few rounds of getting helpful feedback from engineers they quickly learn to write much better code.
This process is also essential because if you are shipping models to production, you will encounter bugs that require a data scientist to fix and that an engineer cannot solve alone. If the data scientists aren't familiar with the model part of the code base this process is a nightmare, as you have to ask them to dust off questionable notebooks from months or years ago.
There are lots of parts of the process of shipping a model to production that data scientists don't need to worry about, but they absolutely should be working as engineers at the final stage of the hand-off.
I agree with everything you said above and that is exactly how we have always had things at my place of employment (work at a small ML/Algorithm/Software development shop). That being said, the one thing I really don't understand is why Notebooks are essential even for EDA. I guess if you were doing things in Notepad++ or a pure REPL shell, they are handy, but using a powerful IDE like Pycharm makes Notebooks feel very very limiting in comparison.
Browsing code, underlying library imports and associated code, type hinting, error checking, etc., are so vastly superior in something like Pycharm that it is really hard to see why one would give it all up to work in a Notebook unless they never matured their skillsets to see the benefits afforded by a more powerful IDE? I think notebooks can have their place and are certainly great for documenting things with a mix of Markdown, LaTeX and code, as well as for tutorials that someone else can directly execute. And some of the interactive widgets can also make for nice demos when needed.
Notebooks also make for poor habits often times and as you mentioned, having data scientists and ML engineers write code as modules or commit them via pull-requests helps them grow into being better software engineers which in my experience is almost a necessity to be a good and effective data scientist and ML engineer.
And lastly, version controlling notebooks is such a nightmare, and they aren't conducive to code review either.
There's an advantage to long-lived interpreters/REPLs on remote machines for the kind of work done in notebooks. Significant amounts of data may have to be read, expensive computation performed, etc. before the work can begin. Notebooks are an ergonomic interface to that sort of environment if one isn't comfortable with ssh/screen/X-forwarding/etc, and frankly nice for some tasks even if one is.
There's also a tacit advantage to notebooks specifically for Python, as the interface encourages the user to write all of their definitions in a single namespace. So, the user can define and re-define things at their leisure within a single REPL/interpreter lifetime. A user developing against import-ed modules can quickly get stuck behind python's inability to cleanly re-import a module, or be forced to rely on flaky hacks to the import system.
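A tiny, self-contained illustration of that re-import problem (file name and contents made up), for anyone who hasn't hit it:

    # Reloading a module does not rebind names imported with "from ... import".
    import importlib, pathlib, sys

    sys.dont_write_bytecode = True   # keep the example independent of .pyc caching
    sys.path.insert(0, "")           # make the current directory importable

    pathlib.Path("mymod.py").write_text("def f():\n    return 'old'\n")
    import mymod
    from mymod import f              # binds a reference to the current f

    pathlib.Path("mymod.py").write_text("def f():\n    return 'new, improved'\n")
    importlib.reload(mymod)

    print(mymod.f())  # 'new, improved' -- the module object was refreshed
    print(f())        # 'old' -- names bound via "from ... import" are stale

Objects created before the reload keep pointing at the old classes and functions too, which is why people end up restarting the interpreter (or the notebook kernel) anyway.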
It pains me a bit to make the argument _for_ notebooks, but it's important to understand the attractions.
Thanks for sharing that perspective! It was helpful to get that POV. I agree that a requirement for long lived interpreters and a simpler UX to get up and running probably makes it an attractive option.
With VSCode having such excellent remote development capabilities now, however, it feels like a nicer option these days, but I guess only if you really care about the benefits that brings. Agreed about reimporting libraries still being a major pain point in Python, but the "advantage" of Jupyter Notebooks is also unfortunately what leads to terrible practices and bad engineering, as most non-disciplined engineers end up treating a notebook as one giant script of spaghetti code to get the job done.
When EDA involves rendering tables or graphics, notebooks provide a faster default feedback loop. Part of this comes from the assumption that the kernel holds state, so data loading, transformations, and viz can be run incrementally and without switching context. That's not to say that it's not possible to do with a python repl and a terminal with image support, but that's essentially the value prop of notebooks. Terrible for other things like shipping code, but very good for interactive sessions like EDA work.
Personally, I find myself prototyping in notebooks and then refactoring into scripts very often and productively.
I've found myself in a data science group by merger and this(what type of artifact to ship) is a current team discussion point. Would you be willing to let me pick your brain on this topic in depth?
This is how my lab works. We do a lot of prototyping, exploring, making sure everything seems to be working, etc. and then pack it all into reasonably well documented standard code.
Learned this the hard way after working for a group for a while with a single shared notebook I had nicknamed "The wall of madness".
Atom (editor) + Hydrogen (Atom plugin).
I like Hydrogen over more notebook-like plugins that exist for VSCode because it's nothing extra (no 'cells') beyond executing the line under your cursor/selection.
Then i just start coding, executing/testing, refactoring, moving functions to separate files, importing, call my own APIs.. rinse repeat.
I tend to maintain 3 'types' of .py files.
1. first class python modules - the refactored and nicely packaged re-usable code from all my tinkering
2. workspace files - these are my working files. I solve problems here. it gets messy, and doesn't necessarily execute top to bottom properly (i'm often highlighting a line and running just it, in the middle of the file)
3. polished workspaces - once i've solved a problem ("pull all the logs from this service and compute average latency, print a table"), i take the workspace file and turn it into a script that executes top to bottom so i can run it in any context.
This is a daily pain we've experienced while working in the industry! Our projects would usually allocate a few weeks to refactor notebooks before deployment! So we started working on an open-source framework to help us produce maintainable work from Jupyter. It allows easy git collaboration and eases deployment. https://github.com/ploomber/ploomber
I've been using ploomber for a month and so far, I really like it. The developers have been super helpful. It hits the sweet spot for writing developer-friendly, maintainable scientific code. Our data science team is looking at adopting it as our team's standard for deployments.
Admittedly, I'm one of those people. This problem also applies to the use of Excel for exploratory programming and analysis.
There are no guides that I'm aware of. Part of the reason may be a mild "culture" divide between casual and professional programmers, for lack of better terms. Any HN thread about "scientific" programming will include some comments to the effect that we should just leave programming to the pro's.
My advice is to immerse yourself in the actual work environment of the casual programmers: Observe how we work, what pressures and obstacles we face, what makes our domain unique, and so forth. Figure out what solutions work for the people in the trenches. My team hired an experienced dev, and I asked him specifically to help me with this. One thing I can say for sure is that practical measures will be incremental -- ways that we can improve our code on the fly. They will also have to recognize a vast range of skills, ranging from raw beginners to coders with decades of experience (and habits).
Jot down what you learn, and share it. I think our side of the cultural divide needs help, and would welcome some guidance.
I agree with you, having been on both sides of the divide and researched & written my masters thesis on teaching programming to undergrad science students.
Are you aware of https://software-carpentry.org/? It started after I graduated and I knew people who were involved with it at the time. It seemed like a good idea.
It looks like I didn't put it on Arxiv, so I need to find a copy and then put it back online :) Will reply here when I do, but likely to be a week+ before I do
There’s nothing wrong with excel (as long as you stay below the 64k limit). People use it because it works. That is almost tautologically close to whatever it is that software aspires to.
Excel has gotten more people to write code than all other programming environments together. And they’ve often enjoyed doing it. It’s a fantastic success story.
- When turning notebooks into more user-facing prototypes, I've found Streamlit is excellent and easy to use (tiny sketch below). Some of these prototypes have stuck around as Streamlit apps when there are 1-3 users who need to use them regularly.
- Moving to full-blown apps is much tougher and time-consuming.
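For a sense of scale, a Streamlit prototype of the sort mentioned above is typically just a short script. Everything below (the data file, columns, and widgets) is invented:

    # app.py  (run with: streamlit run app.py)
    import pandas as pd
    import streamlit as st

    st.title("Latency explorer")                     # hypothetical little tool

    df = pd.read_csv("latencies.csv")                # hypothetical data
    service = st.selectbox("Service", sorted(df["service"].unique()))
    window = st.slider("Rolling window (samples)", 1, 100, 10)

    subset = df[df["service"] == service]
    st.line_chart(subset["latency_ms"].rolling(window).mean())

Every time a widget changes, Streamlit re-runs the script top to bottom, which maps quite naturally onto notebook-style code.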
This is a great insight! I think parameterizing the notebooks is part of the solution; moving to production shouldn't be time-consuming, and there's definitely no need to refactor the code like I've seen some people do. I'd love to get your feedback. We're building a framework to help people develop maintainable work from Jupyter! https://github.com/ploomber/ploomber
First, yes, this is a common question. IPython does not try to deal with that; it's just the execution engine.
Notebooks do not have to be stored in ipynb form; I would suggest looking at https://github.com/mwouts/jupytext. But the notebook UI is inherently not designed for multi-file and application development, so training humans will always be necessary.
Technically Jupyter Notebook does not even care that notebooks are files; you could save them using, say, postgres (https://github.com/quantopian/pgcontents), and even sync content between notebooks.
I'm not too well informed on this particular topic anymore, but there are other folks at https://www.quansight.com/ that might be more aware. You can also ask on discourse.jupyter.org; I'm pretty sure you can find threads on those issues.
I think on the Jupyter side we could do a better job curating and exposing many tools to help with that, but there are just so many hours in the day...
I also recommend "I don't like notebooks" from Joel Grus, https://www.youtube.com/watch?v=7jiPeIFXb6U. It's a really funny talk; a lot of the points are IMHO invalid, as Joel is misinformed about how things can be configured, but it's still a great watch.
I see where you're coming from. From where you sit Jupyter is a language agnostic tool and so on. But the fact that there are dozens of solutions in this space is surely a problem?
I'd have thought there would be some things you could strongly encourage:
1. Come up with some standard format where the code and the data live in separate files.
2. Come up with some standard format where you can load a regular .py script as a cell-based notebook using metadata comments (and save it again).
If these came out of the box it would solve most of the issues.
Funny you should ask. I just wrote a book called Effective Pandas[0] that discusses ways to use pandas (in Jupyter) that lead to easy re-use, sharing, production, and testing. Here's a video with many of the ideas if you prefer [1].
People tend to have strong feelings when they see my pandas code, as it is different from much of the (bad) advice in the Medium echo chamber. Generally, most who try it out are very happy.
The basics are embrace chaining, avoid .apply, and organize notebooks with functions (using the chain).
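Without trying to reproduce the book, a rough (made-up) example of what that chaining style looks like in practice:

    import pandas as pd

    def tweak_sales(raw: pd.DataFrame) -> pd.DataFrame:
        # One chained expression, no df2/df3/df_final intermediates, no .apply.
        return (
            raw
            .rename(columns=str.lower)
            .assign(revenue=lambda d: d["units"] * d["unit_price"])
            .query("revenue > 0")
            .groupby("region", as_index=False)
            .agg(total_revenue=("revenue", "sum"))
            .sort_values("total_revenue", ascending=False)
        )

    # Notebook cells then mostly just call small functions like this:
    # summary = tweak_sales(pd.read_csv("sales.csv"))

The payoff is that functions like this are trivial to lift out of a notebook into a module and to unit test.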
Oh, and Jupytext is a life saver if you are someone who uses source control.
The whole point of notebooks is to focus only on exploration of data, making some nice plots, adding some explanatory text, and NEVER think about software engineering.
A decent data scientist who also understands software engineering will sooner or later take the prototype code from the notebook and refactor it into proper modules. Either this or the notebook will become an unrunnable mess as it is developed further. Reusing code and functions in a grown notebook is just too fragile.
I'm working on a solution that helps with transforming notebooks into web applications (with a GUI). You just need to define a YAML config (similar to R Markdown) and the framework will generate a web app with interactive widgets. After changing the widgets, the user clicks the Run button and the whole notebook is executed, converted to HTML and displayed to the user.
The problems you mention are solved by auxiliary tools in the notebook ecosystem.
- Look at nbdime & ReviewNB for git diffs
- Check out treon & nbdev for testing
- See jupytext for keeping .py & .ipynb in sync
I agree it's a bit of a pain to install & configure a bunch of auxiliary tools but once set up properly they do solve most of the issues in the Jupyter notebook workflow.
It is only a plan (partially implemented). I am separating code into clean and ad-hoc. Clean code is "supported": maintained (jobs monitored, failures handled, bugs fixed) by more professional developers; if somebody wants a custom job, they are more or less on their own.
When I am asked to fix a problem in such a "custom" job, the first thing I do is refactor the code to follow standards (configuration, hardcoded paths and values, logging, alert notifications to a predefined list of people related to the project, recovery handling, etc.); then it becomes part of the main pool of "maintained code".
In VS Code, a .py file can work like a notebook. VS Code treats #%% as the start of a cell, while it remains a plain comment when running the file as a regular .py script. VS Code can also convert an existing jupyter notebook to a .py file in this format.
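For reference, such a file is just ordinary Python with comment markers (contents invented):

    #%%
    import numpy as np
    x = np.linspace(0, 10, 100)

    #%%
    # Each "#%%" line gets a "Run Cell" link in VS Code's Python extension,
    # but `python thisfile.py` still runs it as a plain script.
    print(x.mean())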
Instead of looking for a quick 1:1 conversion from notebook --> app, it should be a line by line re-creation, using the notebook as more of a whiteboard.
This approach, while much slower, limits errors and ensures sustainability, because both the notebook creator and the app creator will know what's going on.
I think solutions like papermill and others only work when you have infinite money and time.
I agree with the idea of using it as a whiteboard - when I need to do casual programming and data analysis for my non-software job I tend to work it out in a notebook first, then start combining all the smaller cells into larger chunks that I then can move into a proper python script.
This is a fundamental problem for me too. No source control, no tests, hard to extract into libraries. I'm surprised there isn't a better tool already.
if you are "cutting and pasting from the notebook into a .py file" you should look at `jupyter nbconvert` on the CLI.
I think there are ways to feed it a template that basically metaprograms what you want the output .py file to look like (e.g. render markdown cells as comments, vs. just removing them), but I've never quite figured that out.
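For the plain "dump the code cells into a .py file" case, the Python API is also tiny; a sketch, assuming a notebook named analysis.ipynb (name made up):

    from nbconvert import PythonExporter

    exporter = PythonExporter()                      # the stock .ipynb -> .py exporter
    source, _resources = exporter.from_filename("analysis.ipynb")

    with open("analysis.py", "w") as f:
        f.write(source)

The custom-template part (controlling exactly how markdown cells are rendered) is the bit that still needs figuring out.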
All the open phones in the world won't help if you use closed-source WhatsApp / Facebook. And you kinda have to if you want to talk to your less tech savvy friends and family.
And open chat protocols won't help because we don't have open source smartphones :) Maybe this battle is on different fronts and there are many factors in play.
They thought they were working for a "good, solid, respectable journal". Turns out their employer wasn't as respectable as they thought, so they quit - not over a disagreement with their peers but over a disagreement over the journal's standards.
To put a different spin on that: it would presumably be bad for a scientist's career to be associated with a garbage journal that publishes dangerous nonsense.
Most medical journals publish a lot of bullshit papers though. The only difference is that this one has political relevance, so in effect politics dictates what kind of bullshit people accepts.
Pretty sure they quit as an act of public activism about some popular political issue. Unfortunately, when scientists are also activists, they can no longer be objective. You don't see mass resignations when other crap papers get published. If it doesn't embarrass them politically, they accept it.
I think it's a combination. Bad papers that reach conclusions scientists agree with will be generally ignored, while bad papers which reach conclusions they don't like will trigger protest. This is especially problematic in fields where individual studies are extremely noisy and meta-analysis is needed to see the truth. Because the meta-analysis then just ends up showing the prevailing bias.
How do you know what’s happening? Because you read a single biased article? Quitting abruptly is not the proper way to handle this. It’s overly dramatic and the author of the article provides a one-sided view that reeks of the corrupt, establishment rhetoric spewing out of the virology community. Note that some of the scientists that quit worked on vaccines and the companies that produce these vaccines may have influenced the production of this article.
On the ends of the trip, the train often includes a night of accommodation that is only required because of the slower pace of train travel.
For a business trip, I'll often leave on the first flight out Monday morning and fly back Thursday night or Friday night. If the train changes that into leaving Sunday evening and returning Friday or Saturday morning, the "included" nights of accommodation aren't an actual benefit as compared to the competitive mode of travel. Instead, they're a fix to a problem that the train created.
There are trips where I'd rather arrive the night before, so I'm fresh and unrushed for the first morning's meeting. In that case, the train could be better as I'm already taking a night away from the family for performance reasons.
I don't think you can always add in the price of a hotel night. If I'm flying from Paris to Copenhagen at 0900, arriving at 1100 and being in the city center by 1200, I wasn't expecting to have to pay for a night in Paris.
A train that leaves Paris the day before at 1347 and arrives at 0655 the next morning (current schedule) isn't as useful, unless what I really wanted was to spend all day in Copenhagen instead of half a day.
>A train that leaves Paris the day before at 1347 and arrives at 0655 the next morning (current schedule) isn't as useful
That's with several transfers. No way a direct train would take 17 hours for that connection. Most likely it'd leave late at night and arrive in the morning.
> If we correctly price externalities (i.e. environmental cost) then the train should win hands down!
I would not be so certain on that one. The downside of trains is the massive infrastructure requirements. I don't think there are any privately funded and profitable tracks anywhere in Europe. Government pays for this, of course. Of course, if you only account for carbon dioxide, things can look different.
In Europe, 60% of airports are government owned: https://simpleflying.com/how-airports-make-money/ According to that page, Heathrow makes half its money from passengers: operating a train line into the city, car rentals, restaurants, retail, parking, VIP lounges.
Is that saying, if you couldn't extract money from a captive audience for how inconvenient the airport is, it wouldn't make enough to cover its own running costs?
Surely, trains need less infrastructure than cars - a road to every building in the country? Government pays for this, of course.
https://greennews.ie/eu-airlines-propped-up-subsidies/ claims that small European airports are not profitable, and are propped up by government subsidies, which RyanAir uses to undercut competitor rates, and they essentially act as a subsidy to RyanAir.
> Surely, trains need less infrastructure than cars - a road to every building in the country?
Trains need that too. It can be a road from the train station to the building, or it can be tracks. In the end, every building sometimes needs something delivered.
Maybe trains allow you to downgrade the road to gravel, but trains still need the road network for that last mile.
You could walk on a mud path to the train station. You could walk on a cobbled street, not wide enough or strong enough for cars, to the train station. It wouldn't be as convenient, but cars are useless without roads in a way that trains aren't.
A two-way road between every building, and space for on-road parking or space for off-road parking around every building, bloats out the space between buildings and lowers density in a way that makes cars more necessary. It's possible for thousands of people to live within a short walk distance of a train station without even resorting to residential towerblocks.
Which is great until you buy a new bed, your toilet breaks, or any other large service is needed in your house. Sure, a plumber can carry everything to your house, but it is much more efficient when he drives a van with all the different pipe adapters that you might need instead of walking to the office. You won't get a heavy appliance down a mud path unless the delivery is scheduled for a few weeks after the last rain.
I agree we don't need large two-way roads everywhere. However we still need a lot of small roads everywhere because some things cannot be done well by humans walking.
In the context of this thread, can you say "it is much more efficient" to have every single house on the planet tarmac'd, on the off-chance that a plumber might need to carry more than one basket worth of stuff to your house?
The up-front cost is enormous, and the ongoing maintenance is huge regardless of usage. [1] says "deteriorating roads are forcing [American] motorists to spend nearly $130 billion each year on extra vehicle repairs and operating costs" and "The U.S. has [...] a $786 billion backlog of road and bridge capital needs. The bulk of the backlog ($435 billion) is in repairing existing roads, while $125 billion is needed for bridge repair, $120 billion for system expansion, and $105 billion for system enhancement (which includes safety enhancements, operational improvements, and environmental projects)." And of course there's the number of people who die on roads, and the amount spent on motoring costs just because people have to run a car, because everything is so far away, because everyone has cars, in a circular way.
Whereas if that wasn't such a convenient option, you'd be more likely to use parts which lasted longer, and not change them frivolously for fashion reasons, and standardise on pipe adapters, and have more local caches and stores instead of big central warehouses a long way away.
> "You won't get a heavy appliance down a mud path unless the delivery is scheduled for a few weeks after the last rain."
I'm not deliberately missing your point when I say this, but "it's impossible because that would require forward planning" does show society in a bit of an unfavourable light, doesn't it?
Where did I say anything about putting every single house on tarmac? I said several times that gravel is good enough for most roads. As you get into dense cities you will discover that tarmac is a better choice than gravel just because the large number of tasks that don't work well via mass transit makes the disadvantages of gravel show.
> "it's impossible because that would require forward planning" does show society in a bit of an unfavourable light, doesn't it?
Things break without warning. Or are you proposing we automatically replace our large appliances every few years even though they could probably last for 5 times longer? (even then you will still have random early failures). Not everything is worth repairing.
My original point was, under "trains need more infrastructure than planes", to say "and less infrastructure than cars". You then said that trains need cars, which they don't. If you had mud paths and wilderness between cities, an intercity train would be an improvement on that, and so would a local metro. Cars wouldn't be an improvement: everyone getting a Honda Civic wouldn't be able to move on the too-small, too-muddy roads; it would be an instant jam. So, trains don't need cars to add value.
I agree gravel roads at the end of a train journey with motorised vehicles would also add a lot of value.
Small-ish gravel roads for occasional supply vehicles to travel down isn't the original "road network" that I was arguing about efficiency of - small gravel roads in a world built around walking distances wouldn't be a world where everyone could run a Honda Civic and drive in two-way lanes of traffic. That is, by "cars" I didn't mean "motorised vehicles", but "everyone has a car and uses it for most journeys, and the road network to support that".
> You then said that trains need cars - which they don't. If you had mud paths, and wilderness between cities, and intercity train would be an improvement on that
All you needed to say was: the "wild west".
Trains provably added a lot of value for decades without cars, because cars didn't even exist yet.
It is actually rather simple. Airports are extremely expensive of course, but if you build three airports you have three connections. Four airports give six. Five airports, ten... Each of these connections requires dedicated track if you want to go by train.
The EU ETS applies to flights within the EU, so carbon externalities are already priced in.
Also the choice isn't between trains and poorly connected airports. You can build high speed rail connections between city centres and airports, and many cities do so.
I'm not sure how externalities should be priced, but it does seem like a good idea, especially in the age of climate change. It is likely politically and practically very difficult though.
France is, I think, effectively banning internal (non-connecting) flights in return for the airline bailout. Most of the night trains are international, but the EU could ban short non-connecting flights overall too at some point.
Don’t most airlines already offer to buy carbon credits to compensate the flight at a minimal price? Accounting for all other externalities shouldn’t make flight much more expensive.
There's a healthy power market - so the providers can and should work out a way for their clients to exit their contract and for them to sell the electricity elsewhere.
Sure, the client might be liable for some loss-of-earnings fees, but that really should be better for everyone than this.
Sure, but also take into account that a mining operation large enough to enter into electricity contracts probably has other obligations like facilities and staff. Overall it may not be worth the hassle to shutdown for what could be interpreted as typical exchange rate fluctuations.