Sat, 11 Dec. 2021, 18:00 UTC — Sun, 12 Dec. 2021, 18:00 UTC 

Online

Space Security Challenge event.

Format: Attack-Defense

Official URL: https://www.hackasat.com/

This event's future weight is subject to public voting!

Future weight: 1.00 

Rating weight: 1.00 

Event organizers 

The United States Air Force and United States Space Force jointly present this year’s Hack-A-Sat, which is open to all cybersecurity researchers who want to up their skills and knowledge of space cybersecurity. The challenge begins with a qualification round, and culminates in an attack/defend style Capture the Flag Event. It’s a space showdown designed to blur the lines between The Good and The Bad, and to focus all the best minds on creating a Cyber-Secure universe.

Prizes

$50K 1st place
$30K 2nd place
$20K 3rd place

hacker1nati — Sept. 18, 2021, 1:35 p.m.

Hey, I would like to know: is the game held at a physical location, or is it an online game?


hacker1nati — Sept. 18, 2021, 1:35 p.m.

I meant: am I supposed to come to the US to play, or are we playing it online?


Redford — Dec. 14, 2021, 7:22 p.m.

// There's a character limit for comments here, so I had to split the text.

So... this will be a long post. We had really high hopes and hype for the contest, but in the end disappointment and frustration completely took over, even after finishing second and winning a big cash prize. I wish it were different, but I have to say that this was a pretty bad CTF.

Let's start with the nice things:
+ Interesting and unique CTF area - this is, so far, the only CTF we know of that covers security in space.
+ They slightly lowered the formal requirements since last year (but see below).
+ Big prizes.
+ They started to support international transfers for prizes.
+ The non-technical communication before the competition was *much* better than last year.
+ The media, graphics and all the bells and whistles around the CTF were as great as last year, which is a really high bar to meet.
+ I liked psifertex as a presenter on the stream for technical stuff.
+ The qualifiers were _really_ good.

And the bad things, so many of them that I split them into categories:

Organizational:
- They still require at least one US national on each team. This is a super-PITA for non-US teams. If you're making an international contest, then make it truly international, not "international, but US teams have it 10x easier to participate".
- The Slack channel with public announcements was not shared with FluxRepeatRocket; they learned about its existence only a few hours into the competition.
- The organizers ignored most of our questions during the competition, and when they did reply, the answers were vague and indirect. It felt like instead of answers we got wordplay, and we were supposed to guess what the author had in mind.

Competition:
- The organizers left a solver (or some tooling?) for chall2 in /tmp on one team's satellite. OK, mistakes happen, but then they failed to fix this fairly. That team got a big advantage because of it - and I don't blame them, as they reported it to the organizers; the problem is that the organizers didn't handle the issue fairly. The challenge should have been removed and its points reset, and all teams should have received a solver for it, to ensure that we were all in the same state (we actually did receive _something_, but it was only part of the solver, and fixing it took some teams a few hours). Also, there was no clear communication about the incident; we still don't know what exactly happened.
- Scoring was a total black box: just a single number was available (the total score). No breakdown into components was available, nor the formula for how the points were calculated. We were gaining and losing points, but no one knew why or how our actions influenced them. E.g. at some point we started exploiting a challenge, but our points delta went down.
[next part below]


Redford — Dec. 14, 2021, 7:22 p.m.

- One could say that the previous point overlooks the fact that there was a visualization for SLA, so it wasn't that bad. But as it turned out during the CTF, the visualization was only loosely correlated with SLA. In the organizers' words: "Clarification: Satellite visualization does not directly map into points for scoring. If your groundstation turns red, that does not necessarily mean that you are losing points for it, it is simply a basic visualization of the system." ???
- We tried to reverse engineer the scoring formula, and it seems that the challenges almost didn't matter; almost all of the score was SLA. But maybe not, who knows.
- How SLA worked was a black box. We had to guess what counts as a "bad state" of the satellite and what doesn't. And some of the rules were completely nonsensical, which caused us to lose points when trying to optimize power consumption (one of the challenges). The organizers fixed that one particular issue, but we never got our SLA points back (even though it wasn't our fault).
- There were more incidents like this, and no one ever got their SLA points back afterwards.
- The CTF was advertised as Attack-Defense, but it was more like a jeopardy with just 8 instances of each challenge. We had almost no control over our services, and the rules forbade almost all offensive plays (like exploiting a satellite and draining its power, turning it away from the sun, or crashing some of its services).
- At some point the organizers recharged the batteries of all the satellites. There was no announcement that this would happen, and optimizing battery use seemed to be one of the most important tasks. Suddenly they just reset the states, disregarding whether you had 20% or 90% at that moment; all teams got reset to 85%.
- Similarly, in the same event, they redeployed all the satellites. This happened without any prior notice; they just said "we're now resetting your satellites". 17h into the competition, and you suddenly need to wake up your whole team and try to bring up and restore the whole setup. This cost us about 2k points, because they restored the satellites to a different state than before.
- No public schedule of task releases or events like satellite resets. This matters a lot in A-D CTFs, where your points accumulate over time. Because of this we had no idea whether we could safely go to sleep or not. E.g. PPP lost tons of points because of this - the reset happened while they were asleep, without prior notice.
- Our sat was deliberately crashed by one of the teams, and we lost SLA because of it, even though this was forbidden by the rules. I'm not angry at that team for doing so, as the rules were quite confusing, but the organizers should have restored our unfairly lost points. We didn't receive any reply from them when we asked about this (repeatedly).
[next part below]


Redford — Dec. 14, 2021, 7:23 p.m.

- Suddenly, 3h before the end, the scoreboard was frozen (i.e. it stopped being updated). There was no mention of this in the rules and no prior announcement. Overall I don't like the idea of hiding the scoreboard for many reasons, but hiding it _and_ not saying beforehand that this would happen is the absolute worst.

Challenges:
- Overall, the A-D challenges themselves seemed quite boring, and the bugs didn't make much sense and seemed artificial (not resembling a real bug a programmer could make, but see below).
- E.g. the crypto chall (chall3) had a backdoor, but it was added in a totally unrealistic way that didn't make much logical sense to us - it was a deliberate, non-obvious backdoor in the math algorithm, which would make sense on its own, but they combined it with a very obvious fail (using rand() for key generation). You either backdoor the software and want it to not be detected easily (the first bug), or you don't care (the second bug). But you don't do both at the same time? And I don't have a problem with unrealistic challenges per se; sometimes it makes sense to simplify some aspects to "distill" the interesting part of the problem (I even do this often myself!). But this case is different - they both made the challenge unrealistic _and_ obfuscated the math problem.
- There was something weird about the satellite simulation. The satellite was partially simulated, but we didn't know what was real or how the simulation worked (e.g. different sensors gave readings indicating different orbits - they were probably simulated incorrectly, resulting in contradictory readings; also, we thought that the battery was real, until we managed to _slow down time_ by exploiting a bug, like literally slowing down the matrix...). All of this was a surprise; there was no description of the setup, so we assumed that this would be a more or less realistic scenario. Instead we got a half-simulated satellite, with no mention of which parts were real and which were simulated - we had to guess everything.
- There was only a single challenge available at a time for most of the competition, which made this CTF quite linear for (IMO) no good reason. A lot of people in our team just didn't have any task to do.
- Some challenges were released right before the end, which is a pretty bad practice in CTFs. It just adds more chaos and randomness to the competition in my opinion.
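For context on why rand()-style key generation (mentioned in the chall3 point above) is considered such an obvious fail: if all key material comes from a non-cryptographic PRNG seeded with something guessable, like the current time, an attacker can simply brute-force the seed and regenerate the key. This is a minimal sketch of the idea; the `keygen` function here is hypothetical, not the actual chall3 code:

```python
import random
import time

KEY_LEN = 16

def keygen(seed):
    # Hypothetical flawed keygen: every key byte comes from a
    # deterministic PRNG seeded with a guessable value.
    rng = random.Random(seed)
    return bytes(rng.randrange(256) for _ in range(KEY_LEN))

# "Victim" generated a key at some point in the last hour.
secret_seed = int(time.time()) - 1234
victim_key = keygen(secret_seed)

# Attacker: brute-force the tiny seed space (one hour of
# timestamps) and regenerate candidate keys until one matches.
now = int(time.time())
recovered = next(s for s in range(now - 3600, now + 1)
                 if keygen(s) == victim_key)
assert recovered == secret_seed
print("recovered seed:", recovered)
```

C's rand()/srand(), which the challenge apparently used, has the same weakness, often with an even smaller effective seed space.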

