Our business model is focused on providing VLEO systems (satellites, buses, and full mission services). But future customers may be provided image services :)
1. The proximity improves performance:
- Range^2: imaging, other sensing, closing link budgets (data you're sending or signals you're sensing)
- Range^3: SAR
- Range^4: Active sensing (lidar, radar, etc)
- Speed: Comms latency, time to intercept
- If you can build systems fast/cheap, this physics unlock creates a new paradigm for system architectures (compared to traditional cost/schedule/performance tradeoffs)
2. Diversification is important for resilience/deterrence
- Drag self-cleans debris
- It's below the belts where radiation gets trapped after a nuclear detonation. That makes for not only a survivable orbit, but one with assured reconstitution
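The range-scaling laws in point 1 are easy to sanity-check. A quick sketch, with hypothetical altitudes (a traditional ~500 km orbit against a ~275 km VLEO one; neither number is from the post):

```python
# Illustrative only: relative signal gain from flying closer, using the
# scaling exponents above (passive imaging/comms ~R^2, SAR ~R^3,
# active sensing like radar/lidar ~R^4).
def signal_gain(r_traditional_km, r_vleo_km, exponent):
    """Factor by which received signal improves when range drops."""
    return (r_traditional_km / r_vleo_km) ** exponent

# Hypothetical altitudes: a traditional 500 km orbit vs. ~275 km VLEO.
print(f"Imaging/comms (R^2): {signal_gain(500, 275, 2):.1f}x")  # ~3.3x
print(f"SAR (R^3):           {signal_gain(500, 275, 3):.1f}x")  # ~6.0x
print(f"Radar/lidar (R^4):   {signal_gain(500, 275, 4):.1f}x")  # ~10.9x
```

The R^4 case is why active sensing benefits most: roughly halving the range recovers an order of magnitude in return power.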
I work near space but not in space. I'm not sure I understand your process here. I see 2 possibilities: 1. You bought something whose manufacturer spec lied. While it's true we often validate specs, our terrestrial stuff is a lot cheaper, so we can afford the spares. That said, if we buy something that doesn't meet the spec, you best believe we're taking the actions necessary. 2. This was built or designed in-house, and the requirements didn't flow down correctly. That's also not great.
To be honest, postmortems (especially from startups) walk a fine line of scaring off investors, and this write-up seems a bit too glaze-y. I'm very happy for you that so much worked so effortlessly post launch, but that's more a success story than a postmortem. I'd like to see more of the root cause analysis for the issue, both technically and programmatically.
To be certain, if you're in the trenches of this anomaly investigation you'll get the full root cause and corrective action presentation, but that's not what this post is for.
You're correct on 1: we ended up hitting an edge case in their spec that they hadn't adequately tested to, and their upper-level management and engineering leadership were swift to accept the fault and implement fixes with us going forward.
From an SE perspective, as a "COTS" product, we had spec'd correctly to them; they accepted our requirements and then executed each unit's acceptance test plan (aka lower level than first-unit quals or life tests, where this should have been caught) on the ground without anything amiss. We ran through our nominal and off-nominal cases at the higher level of assembly, but not for a duration that caught this on the ground. It wasn't until we were in extended operation on orbit that the issues began.
Sadly, as you state, space isn't like the ground: you can't buy spares or replace things that fault, even for a true high-volume COTS product that might slip through acceptance testing.
> We ran through our nominal and off nominal cases at the higher level of assembly, but not for a duration that caught this on the ground. It wasn't until we were at extended operation on orbit the issues began.
So I think that's a great answer. It's all about risk mitigation and tolerance. Your test checked whether the part worked to a reasonable and hopefully calculated level. It's good that the supplier's management accepted fault, too. It's a lot harder when they don't, but honestly I've found that to be much rarer in the professional world than in consumer.
To me, and I'm not an investor, and probably not your target audience, those 3 short paragraphs told me a lot more in a positive way than I expected. I don't think it would be out of place to put it in the post. Honestly as is I thought this was your guys' fault for myriad reasons. Now I'm flipped the other way. Of course it's still your problem even though it's not your fault. Or, maybe, you do claim some blame for the worst case analysis not shaking out that edge case. Either way I feel much less like you guys just went to the hardware store, bought some random lube, packed the bearing, and shipped it thinking you'll figure it out on the next launch (which is sadly the fast and loose reputation new space is starting to get).
Founder/CEO of Albedo here. We published a detailed write-up of our first VLEO satellite mission (Clarity-1) — including imagery, what worked, what broke, and learnings we're taking forward. Happy to answer questions.
Clarity is designed for a GSD (ground sample distance) of 10 cm. Generally the industry uses resolution<>GSD interchangeably. Agree it's not the true definition of resolution. But I'd argue the diffraction limit is an incomplete metric as well, like how spatial sampling is balanced with other MTF contributors (e.g. jitter/smear). For complete metrics, we like 1) NIIRS or 2) % contrast for a given object size on the ground (i.e. system MTF translated to ground units, not image-space units).
The main performance goal for us was NIIRS 7, and we decomposed GSD/MTF/SNR contributors optimized for affordability when we architected the system
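For anyone who wants to poke at the GSD-vs-diffraction tradespace being discussed here, a rough sketch. Every number below is a placeholder I made up, not Albedo's actual optical design:

```python
import math

# Rough GSD and diffraction-limit sanity check. Altitude, pixel pitch,
# focal length, and aperture are all invented example values.
def gsd_m(altitude_m, pixel_pitch_m, focal_length_m):
    """Ground sample distance: one pixel projected onto the ground."""
    return altitude_m * pixel_pitch_m / focal_length_m

def rayleigh_ground_res_m(altitude_m, aperture_m, wavelength_m=550e-9):
    """Diffraction-limited ground resolution (Rayleigh criterion)."""
    return 1.22 * wavelength_m / aperture_m * altitude_m

alt = 275e3                                  # assumed VLEO altitude, m
print(gsd_m(alt, 3.5e-6, 9.6))               # ~0.10 m for these made-up optics
print(rayleigh_ground_res_m(alt, 0.55))      # coarser than the GSD
```

Note that with these invented numbers the Rayleigh limit comes out coarser than the GSD, which is exactly the sampling-vs-diffraction tension the comment above is pointing at: GSD alone doesn't capture system MTF.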
How do you manage along-track smear? At those altitudes you're pushing close to 8km/s. Traditionally you'd either need to keep the satellite rotating through the collect or somehow keep the integration time in the single digit microseconds.
A little more detail that we didn't get into in the post: the 3-CMG control mode we first uploaded was v1. We had plans to improve the agility with later versions. In v1, we didn't have quite enough rate to match the earth's, even with max TDI. We called it Banana Scanning. Kind of like slipping over the earth.
Net - the CMG imagery we captured had a few pixels of along-track smear in it. Which would have been removed in post-processing if we had made it through focus cal.
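For intuition on why rate-matching matters here, a back-of-envelope smear estimate. The ground speed, exposure time, and rate-match fraction below are all assumptions for illustration, not figures from the post:

```python
# Back-of-envelope along-track smear: uncompensated ground motion during
# the exposure, expressed in pixels. All numbers are assumed.
def smear_pixels(ground_speed_mps, integration_time_s, gsd_m,
                 rate_match_fraction=0.0):
    """Pixels of smear left over after the body rate cancels part of
    the ground motion."""
    residual_speed = ground_speed_mps * (1.0 - rate_match_fraction)
    return residual_speed * integration_time_s / gsd_m

# Unsteered: ~7400 m/s ground speed over a 1 ms exposure at 0.10 m GSD.
print(smear_pixels(7400, 1e-3, 0.10))        # 74 pixels - hopeless
# With ~95% of the ground rate matched (a made-up shortfall figure):
print(smear_pixels(7400, 1e-3, 0.10, 0.95))  # ~3.7 pixels
```

A shortfall of a few percent in rate-matching is enough to leave the "few pixels" of smear described above, which is why it's plausibly recoverable in post-processing.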
How did you manage meaningful attitude control with only torque rods? They would need to be big (read: heavy) to be useful - was this just stabilising in the inertial frame, or active pointing?
Mag dipoles in chassis and components tend to lock tumbling satellites into the Earth’s magnetic field. Did you see this? Or did you see atmospheric drag dominate at this altitude?
I'm AyJay, Topher's co-founder and Albedo's CTO. We'll actually be publishing a paper here in a few weeks detailing how we got 3-axis torque rod control so you can get the real nitty gritty details then.
We got here after stacking quite a few capabilities we'd developed on top of one another and realizing we were beginning to see behavior we should be able to wrap up into a viable control strategy.
Traditional approaches to torque rod control rely on convergence over long time horizons spanning many orbits, but this artificially restricts the control objectives that can be accomplished. Our momentum control method reduced convergence time by incorporating both current and future magnetic field estimates into a purpose-built Lyapunov-based control law we'd been perfecting for VLEO. By the time the issue popped up, we already had a lot of the ingredients needed. We were able to get our algorithms to control within an orbit or two of initialization, and then stay coarsely stable for most inertial ECI attitudes, albeit with wide pointing error bars as stated in the article. For what we needed, though, it was perfect.
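For readers who want a concrete baseline while waiting for the paper: below is the textbook cross-product magnetorquer law for momentum dumping. This is emphatically not Albedo's Lyapunov controller, just the classic scheme whose many-orbit convergence is what's being improved on:

```python
import numpy as np

# Textbook cross-product magnetorquer law for momentum dumping.
# m is the commanded magnetic dipole (A*m^2), tau the resulting torque.
# gain=1 is chosen only so the math is easy to check; a real system
# would use a much smaller gain and saturate the dipole.
def crossproduct_dipole(h_excess, b_field, gain=1.0):
    """Dipole that bleeds excess angular momentum h (N*m*s) via field B (T)."""
    b_norm_sq = float(np.dot(b_field, b_field))
    return (gain / b_norm_sq) * np.cross(h_excess, b_field)

def magnetic_torque(dipole, b_field):
    return np.cross(dipole, b_field)

h = np.array([0.05, 0.0, 0.0])    # excess momentum about x
b = np.array([0.0, 0.0, 30e-6])   # ~30 uT field along z
m = crossproduct_dipole(h, b)
tau = magnetic_torque(m, b)
# tau directly opposes h here because h is perpendicular to B. The
# component of h parallel to B is uncontrollable at any instant - classic
# schemes must wait many orbits for the field geometry to rotate, which
# is the convergence-time limitation described above.
```

The uncontrollable-axis problem is why incorporating predicted field geometry (as described in the comment) can collapse convergence from many orbits to one or two.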
I'd love to read this paper! This was on my mind when I was GNC lead for an undergraduate project at Michigan Tech (Oculus-ASR - Nanosat-6 winner). We had a combined controller for reaction wheels and magtorque rods.
>The drag coefficient was the headline: 12% better than our design target.
Is the drag much better than a regular cubesat? It doesn't look tremendously aerodynamic. From the description I was kind of expecting a design that minimized frontal area.
>Additional surface treatments will improve drag coefficient further.
Is surface drag that much of a contributor at orbital velocity?
Ultimately it's about the ballistic coefficient. You want high mass, low cross-sectional area, and low drag coefficient (Cd). With propulsion for station-keeping, it's challenging to capture the VLEO benefits with a regular cubesat. That said, there are VLEO architectures different than Clarity that make sense for other mission areas.
Yes it's a big contributor. The atmosphere in VLEO behaves as free molecular flow instead of a continuous fluid.
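To make the ballistic coefficient point concrete, a quick comparison. The masses, areas, and the atmospheric density at ~275 km are rough guesses, not mission figures:

```python
# Ballistic coefficient and drag deceleration. Density is a rough guess
# for ~275 km at moderate solar activity; spacecraft numbers are invented.
def ballistic_coefficient(mass_kg, cd, area_m2):
    """BC = m / (Cd * A); higher BC means slower orbital decay."""
    return mass_kg / (cd * area_m2)

def drag_decel(rho, v_mps, cd, area_m2, mass_kg):
    """a = 0.5 * rho * v^2 * Cd * A / m"""
    return 0.5 * rho * v_mps**2 * cd * area_m2 / mass_kg

rho = 4e-11   # kg/m^3, rough guess for ~275 km
v = 7750.0    # m/s, approximate orbital speed
# Hypothetical fridge-sized VLEO sat vs. a ~4 kg 3U cubesat:
print(ballistic_coefficient(500, 2.2, 1.0))   # ~227 kg/m^2
print(ballistic_coefficient(4, 2.2, 0.01))    # ~182 kg/m^2
print(drag_decel(rho, v, 2.2, 1.0, 500))      # m/s^2 the propulsion must fight
```

With these made-up numbers the bigger satellite only wins modestly on BC, which is the point: mass, frontal area, and Cd all have to be worked together, and a cubesat form factor doesn't leave much room to do that.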
> It is undesirable to have a definition that will change with improving technology, so one might argue that the correct way to define space is to pick the lowest altitude at which any satellite can remain in orbit, and thus the lowest ballistic coefficient possible should be adopted - a ten-meter-diameter solid sphere of pure osmium, perhaps, which would have B of 8×10^−6 m^2/kg and an effective Karman line of z(-4) at the tropopause
Assuming I did the math right such a satellite would only run $265 million USD for the materials (launch costs for an object of ~9k kg left as an exercise for the reader). That's far more affordable than I had expected. Amusing thought.
First - they never want to use someone else's software framework again (an early SW architect decided that would accelerate things, but we ended up rewriting almost all of it), and it was all C++ on the satellite. We ran Linux with preempt_rt.
We wrote everything from low level drivers to the top level application and the corresponding ground software for commanding and planning as well. Going forward, we're writing everything top to bottom, just to simplify and have total ownership since we're basically there already.
For testing we hit it at multiple levels: unit test, hardware in the loop, a custom "flight software in test" we called "FIT" which executed a few different simulated mission scenarios, and we tried to hit as many fault cases as we could too. It was pretty stressful for the team tbh but they were super stoked to see how well it worked on orbit.
A big one for us in a super high resolution mission like this is the timing determinism (low latency/low jitter) of the guidance, navigation, and control (GNC) thread. Basically it needs to execute on time, every cycle, for us to achieve the mission. Getting enough timing instrumentation was tough with the framework we had selected, and we eventually got there, but making sure the "hot loop" didn't miss deadlines was more a function of working with that framework than any limitation of Linux operating well enough in an RTOS fashion for us.
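The instrumentation bookkeeping itself is simple to sketch. Here's a toy Python version of a deadline-instrumented rate loop (the real thing is C++ on preempt_rt; this just shows the shape: absolute deadlines so error doesn't accumulate, a miss counter, and worst-case jitter tracking):

```python
import time

# Toy deadline-instrumented fixed-rate loop. Not flight code - just the
# bookkeeping pattern: advance an ABSOLUTE deadline each cycle (like
# clock_nanosleep with TIMER_ABSTIME) so sleep error never accumulates,
# and record misses and worst-case jitter.
def run_rate_loop(period_s, cycles, work):
    deadline = time.monotonic()
    worst_jitter = 0.0
    misses = 0
    for _ in range(cycles):
        deadline += period_s                 # absolute, drift-free deadline
        work()
        now = time.monotonic()
        if now > deadline:
            misses += 1                      # the work overran its slot
        else:
            time.sleep(deadline - now)       # park until the next cycle
        worst_jitter = max(worst_jitter, abs(time.monotonic() - deadline))
    return misses, worst_jitter

misses, jitter = run_rate_loop(0.01, 50, lambda: None)
print(f"missed {misses} deadlines, worst jitter {jitter * 1e6:.0f} us")
```

On a stock desktop kernel the jitter number wanders with load; the point of preempt_rt (plus pinned, SCHED_FIFO threads) is to bound it.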
Moving fast to make launch, we had missed a harness checkout step that would’ve caught a missing comms connection into an FPGA, and it was masked because our redundant comms channel made everything look nominal.
On orbit, we fixed it by pushing an FPGA update and adding software-level switching between the channels to prove the update applied and isolate the hardware path - which worked. Broader lesson: it is possible to design a sw stack capable of making updates to traditionally burned-in components.
> it was masked because our redundant comms channel made everything look nominal.
Hah, this has bitten me often enough I check for it in test suites now - ok, you’ve proven the system works and the backup works, have you proven the primary works? Another in the long list of ways you don’t expect a system to bite you until it does…
I'm just saying that if they use a traditional IP VPN over the Internet, traffic is of course encrypted, but the two endpoints terminating the VPN must have a public IP.
Of course, this is not necessary if they simply use UHF radio signals encoding bits with pulse modulation, or whatever.
Indeed, that's exactly what I was curious about (Internet vs. other channels), since at least part of their tech stack in orbit is relatively straightforward (e.g. Linux with preempt_rt).
Why would you VPN here? If you did, why would you do so over the public internet? You can route IP packets over internal links, including via radio (that is the entire point of WiFi after all).
Although it occurs to me that "does your network stack employ either ethernet or IP (and what were the considerations)" might be an interesting question.
Let me dream about the one guy working remote for a satellite company and just jumping into a direct VPN with a satellite, won't you? :)
All kidding aside, there are some protocols, like FTP or RTSP, which don't play well with NAT because they include IP addresses in the payload itself. Some solutions exist (so-called ALGs) but they are often fragile. If the satellite was using one of these protocols to talk to something on a public cloud platform (say, sending images via FTP to an EC2 VM), satellites could have a public address to avoid NAT issues, and at that point you could also use it as a management address (although maybe only as a backup path).
It's a bit far fetched, but when it comes to satellites, you could say "sky is the limit" :)
Ignoring for a moment the fact that Grafana could be self-hosted or in SaaS, Grafana is heavily used to collect logs and metrics from standard servers.
Of course, maybe they built their own integrations to convert raw logs and metrics sent via plain pulse modulation to plain syslog and Prometheus metrics, but maybe it's just that they're using (probably private) IP addresses on board and they are simply streaming logs and metrics to the ground using standard TCP/UDP protocols.
From my perspective, the number one reason we had a well functioning satellite out of the gate is my philosophy of testing "safe mode first". What that means is, in a graduated fashion, test that the hardware and software together can always get you into a safe mode - which is usually power positive, attitude stable, and communicative. So our software integration flows hit this mission thread over and over and over with each update. If we shipped a new software feature, make sure you can still get to safe mode. If we found a bug that prevented it, it was the first thing to triage. We built out our pipelines to simulate this as much as we could, then ran it again on the development hardware, and eventually would load a release onto flight once we were confident this was always solid. If you're going to develop for space, start here.
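Here's a sketch of what "safe mode first" might look like as an automated release gate. All names and thresholds are invented for illustration, not Albedo's actual pipeline:

```python
# Illustrative "safe mode first" regression gate: after every software
# change, the simulated spacecraft must still reach a state that is power
# positive, attitude stable, and communicative. Thresholds are made up.
from dataclasses import dataclass

@dataclass
class SimState:
    battery_net_w: float      # charge rate minus load; > 0 is power positive
    body_rate_dps: float      # residual tumble rate, deg/s
    radio_locked: bool        # ground link established

def is_safe_mode(s: SimState) -> bool:
    return s.battery_net_w > 0 and s.body_rate_dps < 0.5 and s.radio_locked

def safe_mode_gate(run_scenario) -> bool:
    """Run one simulated fault scenario; the release fails if the sim
    doesn't end in safe mode."""
    return is_safe_mode(run_scenario())

# Example fault-injection scenario: whatever it does, it must end safe.
def tumbling_recovery():
    return SimState(battery_net_w=4.2, body_rate_dps=0.1, radio_locked=True)

assert safe_mode_gate(tumbling_recovery)   # gate passes -> ship the release
```

In a real pipeline you'd run a whole matrix of scenarios (each new feature, each known fault case) through this one gate, which is the "over and over and over with each update" described above.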
That principle goes far beyond developing for space, but for space the pay-off is the largest. It also applies to maritime, medical, aviation, and mining, and probably other domains where whatever you make is going to have to function even when you cannot reach it at all.
But it is great to point it out and to show how essential this kind of thinking is, and how it can help to focus on what is really important and what can be fixed.
What is interesting is to theorize about the relative impact of losing any of those three and how you managed to fix the second because you still had power and were able to communicate with the device. I think within those the order of relative importance would be communications, power and then attitude but I could well be mistaken.
At least it wasn't "learns", like "we had five learns from the project". Like, say, "ad spend". There's already a noun form of a verb, it's called a gerund: "ad spending".
As with genes, duplication creates opportunity for specialization. Regardless of what drives the duplication and early divergence.
AIchat "compare and contrast the subtle implications of phrase/X with phrase/Y" suggests using "ad spend" for a number (like "budget"), and "ad spending" for activity and trend ("act of spending").
"Learns" has implications of discovery, smaller size, iterative, informality, individual/team scale, messy, and more. For illustration, to my ear, "Don't be stupid" fits as a "lesson", but not as a "learn" or a "takeaway". Nor as a "lesson learned", with its implication of formality and reflection. "Software X is flaky" fits better "learn" than "lesson". And "unmonitored vendor excursions are biting us" more a "takeaway" (actionable; practical vs process).
Great question. Orbit Fab plans to launch fuel depots into space, somewhat aligned with orbits of their customers. For most cubesat/smallsat constellations, it probably does make more sense to just launch a new fleet. Our satellites will be refrigerator size and a bit pricier, so it makes more sense to refuel and extend the life.
Yeah check out our careers page!
Great question. The 3 tiers correspond to how much imagery is ordered per year, the more you order the cheaper the per unit price is. The "minimum order" corresponds to the area size on the ground for a single tasking order (a single image).
The comparison shot was taken with a drone at an altitude that corresponds to our 10cm resolution. The 30cm version was obtained through downsampling the 10cm image accordingly. I wish we had a real 10cm satellite image to use, but our satellites won't be on orbit until 2024, so we are using a drone to show examples in the meantime.
I don't have any papers offhand, but the wikipedia page on microbolometers is great. Traditional sensors for thermal imaging are made of materials like InSb and HgCdTe, which require cooling to cryogenic temperatures in order to limit dark noise. These types of sensors are similar to visible sensors in that an electron is generated for each incident photon. Microbolometers operate fundamentally differently: their electrical resistance changes in response to temperature. While most space thermal sensors today still use traditional materials, most terrestrial thermal cameras (night-vision, thermal drones) use microbolometers when imaging in LWIR.
It is a raw drone image. I probably should have picked a different option, as this was collected just after sunset, which gives a nice uniform lighting but also leads to more noise and some of those artifacts you mention.
The resolution improvement has nothing to do with software (although there are super-resolution methods to improve it). It has to do with both flying very low as well as building a larger telescope than normal. While the satellites will still be much smaller than the traditional approach to capturing 10cm resolution, they will be larger than the new space smallsat movement we've been seeing the last few years.
Thank you!