I'm a PM and I ended up graduating with an econ BA instead of a CS degree, but I took a few intro CS classes at UCLA in...2011/2012.
Intro to Programming 1 and 2 were taught in C++. Can't remember which one taught pass by reference, but it was definitely in one of those two.
Third class I took was Intro to Systems or something like that. The whole class was C and x86 ASM. Lots of binary operations in that one, used K&R a fair amount in that class (also learned debugging assembly in GDB and some other "low-level"-ish stuff).
Just looked it up, can't say 100% it's still C++, but the syllabus looks about the same as I remember for both classes. It gets to pointers by week 7, and then the second class goes deeper:
And again, I didn't even get a CS degree. This was all lower-div CS work at a public university, and I'm not even a career engineer.
> So no, not knowing the difference between passing references or values, or pointers and dereferencing them is not as strange as you seem to think it is. It is not a piece of knowledge or experience that is seen as valuable enough by the people that create the curriculum or the companies that employ the largest quantities of inexperienced workers in this part of the world.
This attitude is why you're getting flak in this thread. Your claim that "We don't teach pass by reference these days" was too absolute, and not accurate for a ton of people. Then someone came back and told you that, and you told them that their claim was too absolute.
I'll also say that it's something that was absolutely valued around the orgs I worked in at Microsoft (Azure, DevDiv, Windows, very roughly bottom half of the stack teams). If not C/C++ pointers, then __absolutely__ passing by reference in C#.
Point being: __knowing__ about pointers, passing by ref vs. value, etc. is not as strange as __you__ seem to think it is.
I never stated my own position on this piece of knowledge, so no, __I__ did not have anything to do with your point.
C and C++ (and Assembly and compilers) are not part of the standard college software engineering curriculum here. Your opinion on that doesn't matter (just like mine doesn't matter) because it is a verifiable fact. And as such, it is also not strange to not see this bit of knowledge being prevalent. K&R isn't used much except if you are either taking the purely theoretical CS degree courses or if you tack them on to the normal required courses. Even the Gang of Four is only mentioned in passing when talking about patterns.
You __could__ argue about whether this is foundational knowledge, and if so, you __could__ argue that the curriculum is therefore in need of adjustment. But I didn't.
Regarding what this was all about (WrtCdEvrydy's comment), he might be talking out of the wrong hole, or he might be in a similar location as I am where this is how it works and that might be different from where you are.
PM lead for PowerShell here, thanks for the callout! I'll take "isn't the worst". ;)
I'd love to get more of your thoughts around how PowerShell might be more useful for the kinds of scenarios you're thinking about. We see a lot of folks writing portable CI/CD build/test/deploy scripts for cross-platform apps (or to support cross-platform development), but we're always looking to lower the barrier of entry to get into PowerShell, as it can be quite jarring to someone who's used Bash their whole life (myself included).
Structured shells have so much potential outside of that, though. I find myself using PowerShell to "explore" REST APIs, and then it's easy to translate that into something scripted and portable. But I'd love to get to a place one day where we could treat arbitrary datasets like that, sort of like a generalized SQL/JSON/whatever REPL.
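For example, the exploration half of that flow can be as simple as this (a minimal sketch; the endpoint is purely illustrative):

    # grab JSON from any REST endpoint as live objects (URL illustrative)
    $r = Invoke-RestMethod https://api.github.com/repos/PowerShell/PowerShell/releases
    $r | Select-Object -First 3 tag_name, published_at   # now it's a script, not a one-off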
Microsoft has always had this problem, but with PowerShell -- which is supposed to be this unified interface to all things Microsoft -- it is glaringly obvious that teams at Microsoft do not talk to each other.
To this day, the ActiveDirectory commands throw terminating exceptions instead of writing non-terminating errors to the error stream. Are you not allowed to talk to them?
The Exchange "Set" commands, if failing to match the provided user name, helpfully overwrite the first 1,000 users instead because... admins don't need weekends, am I right? Who doesn't enjoy a disaster recovery instead of going to the beach?
I'm what you'd categorise as a power user of PS 5.1, having written many PS1 modules and several C# modules for customers to use at scale. I've barely touched PowerShell Core because support for it within Microsoft is more miss than hit.
For example, .NET Core has caused serious issues. PowerShell needs dynamic DLL loading to work, but .NET Core hasn't prioritised that, because web apps don't need it. The runtime introduced EXE-level flags that should have been DLL-level, making certain categories of PowerShell modules impossible to develop. I gave up. I no longer develop for PowerShell at all. It's just too hard.
It's nice that Out-GridView and Show-Command are back, but they launch under the shell window, which makes them hard to find at the best of times and very irritating when the shell is embedded (e.g., in VS Code).
The Azure cmdlets are generally a pain to work with, so I've switched to ARM Templates for most things, because PowerShell resource provisioning scripts cannot be re-run, unlike scripts based on the "az" command line or templates. Graph is a monstrosity, and most of my customers are still using MSOnline and are firmly tied to PS 5.1 for the foreseeable future.
Heaven help you if you need to manage a full suite of Hybrid Office 365 backoffice applications. The connection time alone is a solid 2 minutes. Commands fail regularly due to network or throttling reasons, and scripts in general aren't retry-able as mentioned above. This is a usability disaster.
Last, but not least: Who thought it was a good idea to strip the help content out and force users to jump through hoops to install it? There ought to be a guild of programmers so people like him can be summarily ejected from it!
Thanks for the thoughtful response! Many of these are totally legitimate: in particular, we're making steady progress to centralize module design, release, documentation, and modernization, or at least to bring many teams closer together. In many cases, we're at a transition point between moving from traditional PS remoting modules and filling out PS coverage for newer OAuth / REST API flows.
I don't know how recently you've tried PS7, but the back-compat (particularly on Windows) is much, much better[1]. And for those places where compatibility isn't there yet, if you're running on Windows, you can just `Import-Module -UseWindowsPowerShell FooModule` and it'll secretly load out-of-proc in Windows PS.
Unfortunately, the .NET problems are outside my area. I'm definitely not the expert, but I believe many of the decisions around the default assembly load context are integral to the refactoring of .NET Core/5+. We are looking into building a generalized assembly load context that allows for "module isolation", and I'd love to get a sense in the issue tracking that work[2] of whether fixing it would help solve some of the difficulties you're having in building modules.
For Azure, you should check out the PSArm[3] module that we just started shipping experimentally. It's basically a PS DSL around ARM templates; as someone who uses PS and writes the Azure JSON, you sound like the ideal target for it.
As for the help content, that's a very funny story for another time :D
It looks like the main problem people have with PowerShell is slow startup. You should probably make snappiness your main priority.
As far as the module problems go, this is IMO not really fair - you can't expect every team to have the same standards for how modules should work, whether or not the team is from Microsoft. The best you could do is perhaps form a consulting / standards-enforcement team for Microsoft-grown modules.
I love PowerShell; it's really a poster child for how projects should be done on GH.
And I agree with you about REST APIs - I never use anything else to explore them (including Postman and friends) - I am simply more productive in pwsh. We love it at our company so much that we always create a PowerShell REST API client for our services by hand (although some generators are available) in order to stay in the spirit of the language; all automated tests are done with it, using the awesome Pester 5.
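For a flavor of that, here's a minimal sketch of such a test in Pester 5 syntax (the function and values are hypothetical):

    # Pester 5 test against a hand-written REST client (names hypothetical)
    Describe 'Get-Widget' {
        It 'returns the requested widget' {
            $widget = Get-Widget -Id 42
            $widget.Id | Should -Be 42
        }
    }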
Thanks for all the great work. PowerShell makes what I do a joy, to the point that I am always in it.
> we're always looking to lower the barrier of entry to get into PowerShell
I've used PowerShell regularly since way back when (it was still called Monad when I first tried it).
I'm extremely comfortable in the Windows environment, but even yesterday I found it easiest to shell out to cmd.exe to pipe the output of git fast-export, to stop PowerShell from messing with stdout (line feeds).
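Roughly this kind of workaround (a sketch; the exact flags depend on the export you need):

    # Windows PowerShell re-decodes native command output as text, which can
    # mangle binary data; letting cmd.exe own the redirection avoids that:
    cmd /c "git fast-export --all > repo-export.dat"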
I really like the idea of a pipeline that can pass more than text streams but it absolutely has to be zero friction to pipe the output of jq, git (and awk, sed etc for oldies like me) without breaking things.
We've fixed a ton of these in PowerShell 7 (pwsh.exe, as opposed to Windows PowerShell / powershell.exe), particularly because we needed to support Linux and more of its semantics.
If you're seeing issues within PowerShell 7, please file issues against us at github.com/powershell/powershell
In case you haven't seen it already, I found https://news.ycombinator.com/item?id=26779580 to be a pretty succinct list of the biggest stumbling points (latency, telemetry and documentation).
A couple more specific points I'd like to add from experience writing non-trivial PS scripts:
- Tooling is still spotty. Last I used the VS Code extension, it was flaky and provided little in the way of formatting, autocomplete or linting. AIUI PowerShell scripts should be easier to statically analyze than an average bash script, so something as rigorous as ShellCheck would be nice to have too.
- Docs around .NET interop still appear to be few and far between. I recall having to do quite a bit of guesswork around type conversions, calling conventions and the like (the sort of calls sketched below).
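For anyone unfamiliar, the interop in question is direct .NET calls from script, e.g. (a minimal sketch):

    # static methods, constructors, and generic types straight from PowerShell
    [System.IO.Path]::Combine($HOME, 'notes.txt')
    $list = [System.Collections.Generic.List[string]]::new()
    $list.Add('hello')
    [Math]::Round([double]22/7, 3)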
It's nice to see the docs have had a major overhaul since I last dug into them though :)
> we're always looking to lower the barrier of entry to get into PowerShell, as it can be quite jarring to someone who's used Bash their whole life (myself included).
`apt search powershell` returns no meaningful result on Debian unstable. I think that's a big barrier to entry, at least for me and people who deploy using docker images based on Debian and Ubuntu.
Good to know! I've generally understood that the bar for package inclusion for both Debian and Ubuntu is fairly high (where Debian wants you to push to them and Ubuntu will pull from you).
Our setup today is simply to add an apt repo[1] (of which there is just one domain for all Microsoft Linux packages), and then you can `apt install`.
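Roughly this (a sketch for Ubuntu 20.04; the config package path varies by distro and version, so check the docs for yours):

    # register the Microsoft package repo, then install as usual
    wget -q https://packages.microsoft.com/config/ubuntu/20.04/packages-microsoft-prod.deb
    sudo dpkg -i packages-microsoft-prod.deb
    sudo apt-get update
    sudo apt-get install -y powershell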
We also ship pretty minimal Ubuntu and Debian (and Alpine and a bunch of other) container images here.[2]
Oh, and we ship on the Snap Store if you're using Snapcraft stuff.
Don't return everything, return what I specifically returned (yeah, I know about objects, talking about everywhere else).
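For anyone who hasn't hit this: PowerShell functions emit every uncaptured value, not just the explicit `return`. A minimal sketch (hypothetical function):

    function Get-Thing {
        $list = [System.Collections.ArrayList]::new()
        $list.Add('a')          # Add() returns the new index (0), which leaks into output
        return $list
    }
    Get-Thing                   # emits 0 *and* 'a'
    # usual suppressions: $null = $list.Add('a')  or  [void]$list.Add('a')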
I know it will never happen, but one can dream. Painpoints aside, you and your team are doing an excellent job. Thank you!
Edit: unless you are also responsible for DSC, in which case I take it back. It's terrible.
On a technical level, i would say PowerShell is a breakthrough because it democratized the concept of a structured-data REPL as a shell. This pattern was well-known to GNU (and other LISP) hackers but not very popular otherwise, so thank you very much for that. Despite that, having telemetry in a shell is a serious problem in my view. That, and the other technical criticisms others have mentioned (see previous HN discussions about PowerShell), are why i don't use PowerShell more.
On a more meta level, i'd say the biggest missing feature of the software is self-organization (or democracy if you'd rather call it that). The idea is great but the realization is far from perfect. Like most products pushed by a company, PowerShell is being developed by a team who has their own agenda/way and does not take time/energy to gather community feedback on language design. I believe no single group of humans can figure out the best solutions for everyone else, and that's why community involvement/criticism is important. For this reason, despite being much more modest in the current implementation, i believe NuShell being the child of many minds has more potential to evolve into a more consistent and user-friendly design in the future.
Beyond that, i have a strong political criticism of Microsoft as part of the military-industrial complex, as a long-standing enemy of free software (still no GitHub or Windows XP source code in sight despite all the ongoing openwashing) and user-controlled hardware (remember when MS tried to push for SecureBoot to not be removable in BIOS settings?), as an unfair commercial actor abusing its monopoly (forced sale of Windows with computers is NOT ok, and is by law illegal in many countries), and more generally as one among many corporations in this capitalist nightmare profiting from the misery of others and contributing its fair share to the destruction of our environment.
This is not a personal criticism (i don't even know you yet! :)) so please don't take it personally. We all make questionable ethical choices at some point in life to make a living (myself included), and i'm no judge of any kind (i'll let you be your own judge if you let me be mine). In my personal reflection about my own life, I found some really good points in this talk by Nabil Hassein called "Computing, Climate Change, and All our Relationships", about the human/political consequences of our trade as global-north technologists. I strongly recommend anyone to watch it: https://nabilhassein.github.io/blog/computing-climate-change...
> how PowerShell might be more useful for the kinds of scenarios you're thinking about
I don't think i've seen any form of doctests in PowerShell. I think that would be a great addition for many people. A test suite in separate files is fine when you're cloning a repo, but scripts are great precisely because they're single files that can be passed around as needed.
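Comment-based help already embeds runnable examples, so a hypothetical doctest runner could execute the `.EXAMPLE` blocks and diff the output against what's written there. A sketch of the raw material:

    <#
    .EXAMPLE
        PS> Add-Numbers 2 3
        5
    #>
    function Add-Numbers($a, $b) { $a + $b }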
> Structured shells have so much potential outside of that, though.
Indeed! If they're portable enough, have some notion of permissions/capabilities and have a good type system they'd make good candidates as scripting languages to embed in other applications because these applications usually expose structured data and some form of DSL, so having a whole shell ecosystem to develop/debug scripts would be amazing.
I sometimes wonder what a modern, lightweight and consistent Excel/PowerShell frankensteinish child would look like. Both tools are excellent for less experienced users and very functional from a language perspective. From a spreadsheet perspective, a structured shell would for example enable better integration with other data sources (at a cost of security/reproducibility but the tradeoff is worthwhile in many cases i think). From a structured shell perspective, having spreadsheet features to lay data around (for later reuse, instead of linear command history) and graph it easily would be priceless.
> I'd love to get to a place one day where we could treat arbitrary datasets like that, sort of like a generalized SQL/JSON/whatever REPL.
PS: I wish you the best and hope you can find some time to reflect on your role/status in this world. And i hope i don't sound too condescending, because if you'd asked me yesterday what i would tell a microsoft higher-up given the occasion, it would have been full of expletives :o... so here's me trying to be friendly and constructive as much as possible, hoping we can build a better future for the next generation. Long live the Commune (150th birthday this year)!
I read all of it, as well as some more of your writings that I found, and I very much appreciate your thoughtfulness. I don't agree with everything you've said here, but you raise some very good points. Thanks, friend. :)
I'm a PM at Microsoft that worked on open-sourcing PowerShell. As Windows PowerShell, it's a built-in Windows component, and the number of assumptions that could be made because it was both closed-source and part of the OS were immense. Even getting it to build outside of the rest of the operating system was crazy hard. Then figuring out how to install it in a supported way was hard, and only then did we get to start figuring out how to eliminate usage of private APIs and start doing legal reviews.
And we're "just" a language runtime and a shell shipped in the OS. I can't imagine how hard it would be to unwind the proprietary bits of something like a GPU architecture running in a tightly integrated SoC with multiple vendors who are all deeply protective of their IP.
I say this all as someone who has been a proponent of open source for 15 years. And all the work was absolutely worth it, and deeply rewarding from a personal perspective.
But I can't say it would make the same sense for Nintendo to go through that effort with Pokemon or the N64 architecture.
Thank you for this story, it's very inspiring. I'm sure they will find something good they can open source eventually, even if it isn't those two things.
> If it becomes at all popular then you have further ongoing overhead.
THIS is the one that comes into play. All that stuff you listed before is a slog to go through when you go to open-source, but unless you're parking the thing as a proof-of-concept/end-of-life, you're now signed up for maintaining the repo. That means triaging issues, pull requests, helping out when contributors don't understand why their build is breaking, etc.
And there's the PM aspect of it: you don't just want to develop a bunch of features without talking to your community, so communicating (in a public friendly way) what you're planning to work on, when folks might reasonably expect that, how they might be able to help out, which of their contributed features you might be able to take depending on where you're at in your lifecycle, and RESPONDING TO COMMENTS all takes way more time than just "building [closed-source] product" in a team of 5-20.
And of course, one of the hardest of all: telling people "no, we can't take that change" when they've spent hours and hours doing work for your project for free. In that regard, we're still very much iterating on a transparent design process that allows for consensus BEFORE too much work has been done (though as we all know, building prototypes is often one of the best ways to find out if a design works right or not).
If you're doing it all right, everyone involved in the project should be doing some amount of all of this every single day. There's no compartmentalizing an engineer on an OSS project as "someone who just writes feature code" vs. "someone who does the repo stuff".
So going back to OP's point: while it doesn't literally "cost anything" (or very much) to do the basic act of open-sourcing, doing it the "right way" at scale, where you're truly engaging and working with the bazaar, is very expensive.
Full disclosure: I'm a PM at Microsoft working on PowerShell and was heavily involved in it being open-sourced and ported to macOS/Linux.
Tangent: could you explain why I might want PowerShell on macOS or Linux, if I use those as my primary OS? (I usually only open my Windows VM for checking that my websites work in IE 11.)
Linux is actually half of our usage on PS Core[1], so it's a great question. A lot of folks use PowerShell inside of CI/CD pipelines, especially for cross-platform app development of .NET Core applications (having a single build/test/deploy script is often cited as vastly preferable to trying to maintain a split of .ps1/.cmd/.bat and .sh/.py/.rb).
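Something like this pattern (a sketch; the actual commands depend on your stack):

    # build.ps1 -- the same script runs on Windows, Linux, and macOS agents
    param([string]$Configuration = 'Release')

    dotnet build -c $Configuration
    dotnet test -c $Configuration
    if ($IsLinux) { Write-Host 'packaging for Linux...' }   # $IsLinux/$IsWindows/$IsMacOS are built into PS Core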
But also, lots of folks prefer an object-oriented pipeline. Many of those folks (our primary use case for 10+ years has been IT management) are used to PowerShell on Windows and starting to learn or be exposed to other environments.
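The canonical illustration: properties flow through the pipeline as real objects, so there's no awk/cut-style text parsing:

    Get-Process |
        Where-Object WorkingSet -gt 100MB |
        Sort-Object WorkingSet -Descending |
        Select-Object -First 5 Name, Id, WorkingSet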
We've also got lots of zsh-style optimizations in PSReadLine. In some cases, we've got some catching up to do, but there's also lots of unique interactive optimizations hiding around[2].
It's also great for interactively exploring REST APIs and building scripts via that experimentation. Try running
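    # (any JSON-returning endpoint works; this URL is just an example)
    $a = Invoke-RestMethod https://api.github.com/repos/PowerShell/PowerShell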
And then look at all the properties you can explore:
$a | Get-Member [or gm]
$a.<tab><tab><tab>
Oh, and of course, one of the big reasons is "so you don't have to open a Windows VM to remote into your Windows Server boxes and manage them from a Macbook".
This is obviously not an exhaustive list, but it'd take a lot more time to write about every benefit and scenario here. In any case, it turns out that our user base is pretty spread out among different scenarios, but between our repo and usage numbers, we've been really happy to see how excited such a diverse group of folks is about PowerShell.
And feel free to reach out to me on Twitter @joeyaiello if you ever want to talk more about your experience (or just hop into our GitHub repo). :)
This was far more detailed than I expected. You seem very passionate about PowerShell. The REST exploration looks very cool; I knew I might learn something cool by asking. Thanks!
I'm a huge privacy advocate, but telemetry doesn't have to be a dirty word. With PowerShell Core, we went through an RFC process to define our telemetry goals and implementation[1], we publish our data to a public dashboard[2], and all of the telemetry source code is out there in the open[3]. Our telemetry helps drive prioritization and decisions around platform/OS usage, and disabling it is as simple as setting an environment variable[4].
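Concretely:

    # opt out of PowerShell Core telemetry; read at startup, so set it before
    # pwsh launches (this form affects pwsh processes started from this session):
    $env:POWERSHELL_TELEMETRY_OPTOUT = 1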
If there are any other ways we could make our telemetry implementation more palatable without seriously reducing its usefulness, we're absolutely open to suggestions.
Thanks for replying. If you remove it, it is palatable. Stop carrying the flag for something detrimental to the end user.
Firstly, anything that ships data out of a secure environment is a risk, regardless of your intentions. You don't always get the code right (this has already happened in .NET Core), the surface area of your software is multiplied (everything that sends telemetry talks to different hosts), and finally it adds a huge amount of noise to logs and IDS systems, which makes them less effective.
Stopping this requires a large effort and costs a lot of money. It adds a lot of noise to audits, requires administrative effort to silence, and means we may not be in compliance with various data protection directives.
For all this we get no observable product improvements. Look at Windows 10, with all its telemetry. It's a pile of crap.
Telemetry is literally a guessing mechanism so you don't have to listen to your users' concerns. A cost-cutting exercise. You closed Connect and didn't listen to your users then. You cut off partner support pretty heavily. Now you collect data and make a finger-in-the-air guess.
"without seriously reducing its usefulness" -- can you describe more clearly what the usefulness is that you are attempting to retain, and why that is important to your customers?
PM for PowerShell here, no ETAs right now, but we're working closely with some O365 teams (starting with Exchange Online) to bring their modules up to compatibility with PS 6.x.
As a product manager (technically "program manager" at Microsoft, but it's roughly the same as product managers across the industry), project management is one of the many things I do, in addition to the list above. But I am explicitly not a project manager in that I'm supposed to minimize time spent on that and share some responsibilities in that space with my engineering counterparts.
I haven't seen DomTerm before, but it looks pretty awesome. At a glance, it's basically a GUI-fied tmux hosted in Electron? It would be awesome to have in Windows, but wouldn't that just require that DomTerm add support for these ConPty APIs?
In any case, I'm more interested in your proposed interactions. Did you have anything cool in mind? Given that we ship PowerShell on Linux, we could theoretically do some stuff there (including within PowerShell on WSL) before it's hooked up to ConPty.
I'm not the person you were asking, but this should interest you.
I've been working on a terminal emulator ( Extraterm http://extraterm.org/ ) with some novel features which would dovetail nicely with how PowerShell works. The first is the ability to send files to the terminal where they can be displayed as text, images, sound, etc or as an opaque download. Extraterm also adds a command `from` which lets you use previous terminal output or files, as input in a command pipeline. See http://extraterm.org/features.html "Reusing Command Output" for a demo. This opens up other, more interactive and iterative workflows. For example, you could show some initial data and then in later commands filter and refine it while visually checking the intermediate results.
What I would like to do sometime is integrate this idea with PowerShell and its approach of processing objects instead of "streams of bytes". It should then be possible to display a PowerShell list of objects directly in the terminal, and then reuse that list in a different command while preserving the "objectness" of the data. For example, you could show a list of user objects in one tab and then in another tab (possibly on a different machine) grab that list and filter it the same way as any normal list of objects in PowerShell. You could also directly show tabular data in the terminal, let the user edit it "in place" in the terminal, and then use that edited data in a new command. It allows for more hybrid and interactive workflows in the terminal while still remaining centered around the command line.
Extraterm does these features using extra (custom) vt escape codes. ConPty should allow me to extend these features to Windows too.
Ooooh yeah, that sounds awesome! Going to share this with some people on our team (lots of folks love and use Hyper already, but we're always looking for new stuff to play with).
I would highly recommend you check out the excellent HistoryPx module[1]. Among (many) other things, it supports automatically saving the most recently emitted output to a magic `$__` variable. Theoretically, you could save a lot further back, but you may start to run into memory constraints (turns out .NET objects are a little heavier than text... ;) )
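Usage is about as simple as it gets (a sketch; the filter is just an example):

    Get-ChildItem ~                        # run anything...
    $__ | Where-Object Length -gt 1MB      # ...then reuse the captured output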
But yeah, it's even fully integrated into NixOS options now. You can set up a default install with one line: https://search.nixos.org/options?query=photoprism