New campaign targeting security researchers (blog.google)
235 points by el_duderino on Jan 26, 2021 | 87 comments



You’d think that sending the world’s foremost security researchers malware would be a stupid idea, but they’re people too. They run Windows and Chrome just like everyone else, and (for the most part) nobody can really live with “I’m going to be targeted by a nation state” as their threat model, not even those who are aware of what the capabilities of an advanced zero day are. It’s kind of depressing if you think about it: nobody has really solved “what do I do if I have a government against me” yet…


I guess many software developers and security researchers tend to have worse system security due to what they do - they are likely to have an experimental environment with code signing disabled, system libraries modified, no root password, and they routinely download and run random PoCs and binaries for analysis. Fortunately, most use virtualization for that, and it should protect them from most threats (unless it's a VM-escape 0day); even better is to use another machine in the room (use a VM if you must access it remotely - file transfers can be compromised). But one must have the discipline to keep personal, production, and experimental work compartmentalized - never access your home network, personal accounts, or production servers from the experimental environment. As time goes by, many unintentionally violate the trust boundary for convenience and create serious security problems.

But this attack is nasty - even if one follows all of the security measures I mentioned, they provide no protection. The attackers are not targeting security researchers personally but their exploits. If you only have a single "untrusted" experimental system for reverse-engineering and testing all your exploits, once it's compromised, everything can be stolen. A better-compartmentalized environment is possible (e.g. one for all untrusted files from the web, one for personal development, and disallowing the exchange of any non-text files between them), but it's an order of magnitude more difficult to use.


I agree that everyone, even security researchers, will make mistakes. But there are people who survive with state-level actors in their threat model. These people probably 1) do not post their threat model and mitigations in easy-to-Google places and 2) have the help of one or more other state-level actors.


Well it’s twofold: one is that security researchers will use bad passwords and click on shady links just like anyone else, and the second part is that even people with state-level adversaries who are actively trying to avoid getting hacked (journalists, whistleblowers, the like) get hacked anyway because they…carry an up-to-date flagship. There really doesn’t seem to be any actual protection against a determined state actor short of not using computers…


"Basically, you’re either dealing with Mossad or not-Mossad. If your adversary is not-Mossad, then you’ll probably be fine if you pick a good pass-word and don’t respond to emails from ChEaPestPAiNPi11s@virus-basket.biz.ru. If your adversary is the Mossad, YOU’RE GONNA DIE AND THERE’S NOTHING THAT YOU CAN DO ABOUT IT. The Mossad is not intimidated by the fact that you employ https://. If the Mossad wants your data, they’re going to use a drone to replace your cellphone with a piece of uranium that’s shaped like a cellphone, and when you die of tumors filled with tumors, they’re going to hold a press conference and say “It wasn’t us” as they wear t-shirts that say “IT WAS DEFI-NITELY US,” and then they’re going to buy all of your stuff at your estate sale so that they can directly look at the photos of your vacation instead of reading your insipid emails about them."[0]

[0]https://scholar.harvard.edu/files/mickens/files/thisworldofo...


I feel like James had the deadline wrong in his calendar and this article is what "Right, sorry, I'll get it over to you by the end of the day" looks like when there is not in fact already a pretty much complete piece that just needs some polish but instead an empty Word document and half an idea in your head.

Like so many people, James is pretty confident that anything he doesn't understand (including apparently elliptic curve cryptography) is probably unimportant, and that the solution to his pressing problems is just to make something he knows isn't possible easy (remembering a separate strong random password for every site) so the people who are working on stuff James doesn't understand ought to work on that instead.

This piece was written, I think, slightly before BCP 188 ("Pervasive Monitoring Is an Attack") but to me it feels as though that's the answer to it. Yes, the NSA (or Mossad, but realistically the NSA) could definitely win if that's what it came down to, you or them. But that's very rarely the situation. Their budget, though large, is finite, and your value, even if large, is also finite. If snooping every word said on the telephone by an American costs 5¢ per citizen, why wouldn't the NSA do it? Worth a shot. But if it costs $5000 per citizen that's gonna blow their budget, and for what? So that's what BCP 188 is about, the question isn't whether you're dealing with "Mossad or not-Mossad" it's whether you are the Protagonist or just another extra. We can't make it impossible for a sophisticated and resourceful adversary to succeed, but we can make it very expensive so that they are obliged to choose their shots.


> But if it costs $5000 per citizen that's gonna blow their budget

The end result is that they split the type of surveillance between "cheap" blanket surveillance, and targeted surveillance for the targets that are deemed valuable enough, while also striving to drive the "per target" price down.

Mass surveillance offers a good opportunity for economy of scales, and gives you a very granular estimate of how valuable a particular target is.


I mean, it is pretty clear the piece is supposed to be burlesque, right? Do you actually think James is trying to write about how cryptography is totally useless and we should just give up?


It's certainly busking. I don't know if this was a regular column he did, but if so, as commissioning editor I'd be pretty unhappy with it. I was serious that this feels like it was churned out at pace.

I can't see a way to interpret this that doesn't come back to, fix passwords and stop bothering with this other stuff. In some forms (e.g. satire) you are supposed to sneak in an actual point you wanted to make (e.g. Swift's "Modest Proposal" lists the things Swift thinks would actually work, pretending to dismiss them as inferior to eating babies). But I believe in Burlesque it is considered satisfactory just to point and laugh. I didn't laugh, maybe that's on me.


So, just for context, he wrote a number of these: https://mickens.seas.harvard.edu/wisdom-james-mickens. They're joke articles meant to satirize some field of computer science; cryptography isn't the only topic he discusses.


Six articles like that :( Worse, it appears this is his style everywhere, including live in person. Perhaps somewhere in amongst this James is actually an expert on something who has useful knowledge to impart to Harvard's students, but perhaps not? Maybe you really can go to a "lecture" in which a tenured Harvard professor expects you to laugh at jokes which, even by the already woeful standards of Computer Science jokes, are not funny. Ouch.

One of these articles proposes that the problem with smartphones is that they aren't very good phones. In this "satirical" form it proposes a pyramid shaped "hierarchy of needs" for phones with "Make phone calls" as the most important element at the bottom.

Perhaps in 2014 that felt like an insight, to James Mickens or to his readers. I don't think so, but maybe 2014 is longer ago than I think it is, and maybe nobody had noticed back then that (and I apologise if this is an amazing insight to you now):

Calling them phones was an excuse. People aren't very good at figuring out what they actually want, so telling people we're going to offer them Network capable handheld computers wouldn't work, they don't realise they want those. So you say these are "phones" and then let them gradually figure out that actually they have never wanted to make a telephone call in their life but they did want a handheld computer to access the Network.

The form factor makes no sense for a phone. Clearly a rectangular sheet of glass isn't the right shape for a phone. But it is a good shape for a handheld computer. Which, again, is what you actually wanted anyway.


Y'all are getting help from your governments?


> They run Windows and Chrome just like everyone else

Citation needed.


I can't give you a citation beyond just finding security researchers and looking at what they are using. Yeah, there are a couple of people using something special, but for the most part everyone works on Ubuntu LTS and Firefox, or macOS - the usual OSes - so they can get their work done.


google.com/search?q=windows%2010%20market%20share

google.com/search?q=chrome%20market%20share


Sure, but you must admit that there's probably a difference between the software used by security researchers and that used by average people. I know some who use Linux for their day-to-day operations, and a few more who use OpenBSD. Though I'd say you're right that the majority (especially those who work in more "corporate" jobs) use either Windows or macOS.


This attack is particularly nasty because it was not targeting security researchers personally but their exploits - which are more difficult to protect. Imagine I'm a developer who follows good security practices: I use the web, read mail, and write code on a fully up-to-date OpenBSD system; I use pledge() sandboxing and BSD jails routinely; and I don't open any untrusted non-text file due to the risks. Instead, I do all the risky security analysis on an insecure Windows machine. Now, if someone is going after me in person, I'm well protected. But if someone is going after my exploits, not so much - this setup has zero protection. If the machine contains PoCs from my previous projects, all of them will be stolen. Protection is possible (e.g. separating the test environment into "trusted" and "untrusted" parts, keeping sensitive PoCs in the trusted system, doing tests/RE in the "untrusted" system, and rolling back the system image every time one finishes "untrusted" work), but it's more difficult.


I wouldn't bet on every security researcher also being a security enthusiast, and vice versa. Also, it depends on your scope. Plus, the attacks originated from conversations mainly via Twitter, other social media, and email. I am probably wrong, but even a security researcher wouldn't boot up a read-only OpenBSD live system with a CLI-based browser just to access Twitter, as you seem to imply. Also, the article mentions compromising people with forged Visual Studio projects relying on PowerShell. To me this is pretty much "average people" software.


That's not the point. The cover was a Windows exploit, "please run it", for whatever reason - presumably collaboration followed by monetary rewards, respect, etc.

Why would anyone run that code outside a sandbox? It is unclear - even more so for a security researcher.

Come on, it is a single click to launch a Windows sandbox... Was the source extremely trustworthy? Fine, that would be the same as giving your password to your spouse.


IT is part of the business. The business says jump and you say how high or you join the people camping outside.


Nice try, but I expect a security researcher to understand the very basics of security and avoid putting sensitive data on untrusted OSes.

People have been using testbeds with dedicated hardware since day 1 of computing.


Yup, I am sure you put all data from coworkers or business partners on a completely isolated and safe system first, too. Even if the premise is to work on an ongoing project of yours. Sure.


What's so special about a state-level actor? What can the North Korean government do to a security researcher that a medium-sized Russian company cannot (not necessarily state-affiliated, just a company from any country that does not cooperate with the West)? The North Koreans don't have more money, and they certainly don't have more brainpower.


They operate with impunity because even if caught, they will not be punished. They also usually have large budgets and significant amounts of manpower.


Same with a Russian company. Carders who only steal from Western banks walk in the open in Russia.


Incentives, I would think. There are already smaller, private groups that have nation-state level zero days (after all, they sell these to governments…) but they would also need someone to use it on and the ability to ignore the consequences, neither of which seem to really be the case for a private company.


In my (direct) experience, most security researchers have very poor practices in general. I come to this space from the side of anonymity and general paranoia - because I find that interesting. That paranoia leads to horrible security experiences for me, but I feel much safer in my practices.

If you always assume your data is at risk, and that your data can become your person, you'll take care of it. In the modern era, we're almost post-goods. I couldn't care less if someone broke into my house and stole my TV/monitors/stuff - and it's happened. But the last thing I'd want stolen is data I hold near and dear.


If you don't mind adding an additional layer to your paranoia, I suggest seeing this video on the 2012 solar storm: https://youtu.be/hESunUuFrzk

The video will show how your data can be suddenly destroyed if all you have are digital backups.

Seeing this video led me to buy a laser printer so I can have paper backups: http://ollydbg.de/Paperbak/


I have to ask because my imagination is failing me: what kind of private personal data do you have that is so important that you're worried about it surviving this sort of thing?


Notes and personal source code (private projects, automation scripts, dotfiles...). Not important to anyone but myself :)


Important enough to go through the effort and expense of printing them out and (I'm presuming) scanning them back in?

I mean I guess it just comes down to priorities. I can't think of anything I want to keep so badly I'd bother with that.


I still haven't done it yet, but I plan to do so.

All this important data (the directory where I keep my git repos) is less than 90 MB in size. With 500 KB/page I could store everything in about 180 pages (90 if I print on both sides), which is not that much.
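
The arithmetic above checks out; a trivial sketch in Python (the ~90 MB repo size and PaperBack's ~500 KB/page density are the figures from this comment):

```python
# Page count for a ~90 MB paper backup at ~500 KB/page
# (both figures taken from the comment above).
data_kb = 90 * 1000      # ~90 MB of git repos, in KB
kb_per_page = 500        # PaperBack's approximate density

pages = data_kb / kb_per_page
print(f"{pages:.0f} pages, {pages / 2:.0f} sheets if printed double-sided")
# -> 180 pages, 90 sheets if printed double-sided
```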

Since I store backups of these git repos in a ZFS dataset, I can do incremental backups (with zfs send -I) every 6 months or so. By my estimate, I would be adding at most 1-2 pages per year.

Also, it is not like I expect to ever need to read that back in. I also keep redundant backups of everything in multiple cloud providers and in external HDs. Using these papers would be a last resort if everything else fails.


Yeah, I guess what I'm saying is: think of the state the world would have to be in for you to lose all your redundant backups except the paper ones.

In that world, would you really want this data back so badly that you're going to scan it all in from a giant pile of paper?

Again, I'm not saying that your data isn't actually that important to you, I'm just personally having a hard time imagining any data is or ever will be that important to me.


> In that world, would you really want this data back so badly that you're going to scan it all in from a giant pile of paper?

I can't answer what I would do if the world was in that state, but I'd rather have the ability to restore my important backup than not.

I might be wrong since I haven't done it yet, but creating the paper backup doesn't seem like a huge time investment.


Cryptocurrency private keys might count.


A solar flare won't destroy anything that isn't plugged into a power outlet. We will also be aware of it days before it hits, time enough to unplug anything and not even need to touch backups.

The disastrous consequences are all in the area of losing the power grid itself.


This is a good reason to backup to a cloud provider on the other side of the planet :).


I remember seeing the 1989 solar storm [1] in upstate New York. I was a little kid and thought aliens had come to Earth. I had never seen such things in the sky before - it was really beautiful.

When the power subsequently went out for two days, my parents got quite tired of me telling them it was an alien invasion.

[1] https://en.wikipedia.org/wiki/March_1989_geomagnetic_storm


Wait, what? You mean there's no way to store data in an external HD that's secure from solar storms under concrete or metal shielding? Or simply keep simultaneous cloud backups on more than one part of the world's surface? You could probably just find a cloud service that offers both shielding and multi-locational backups. For many people, even ordinary people, printing out even their personal data, never mind photos, would be absurdly difficult.


Never seen this before, very interesting!

But how much can one realistically print? 500 KB per page isn't much by today's data-hoarding standards.

I guess you don't keep images?


500 KB/page at $0.02/page = $20.97/GB. HDD is $0.01-0.03/GB, SSD $0.10-0.25/GB, and BD/DVD $0.02-0.10/GB at retail prices, before counting redundancy. $20/GB for HDD was as far back as Oct 1999, and by the way it fell further to around $7.50 a year later, in Oct 2000 [1]. Yeah, interesting, but a 10-100 fold density improvement would be nice...

1: https://mkomo.com/cost-per-gigabyte
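
One reading that reproduces the $20.97/GB figure above (the cost-per-sheet, double-sided interpretation is my assumption, not stated in the comment):

```python
# Sanity-checking the $20.97/GB paper-backup figure: assume $0.02
# buys one sheet printed on both sides, i.e. 2 * 500 KB = 1000 KB
# per sheet (the per-sheet interpretation is an assumption).
kb_per_side = 500
cost_per_sheet = 0.02                                # USD, assumed

sheets_per_gib = (1024 * 1024) / (2 * kb_per_side)   # KB in a GiB / KB per sheet
cost_per_gib = sheets_per_gib * cost_per_sheet
print(f"${cost_per_gib:.2f}/GiB")                    # -> $20.97/GiB
```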


25G BD-R can be had for 8 cents. However, the patent license royalties are about 10 cents (I forget if USD or EUR). Also, you can't retroactively pay the royalties when importing. Patents are bad. BluRay would have been developed even without patent protection.


I do have many GBs of blobs backed up both on external HDs and in the cloud, but my most important data (which is what I plan to keep redundant copies of on paper) is just notes and source code, which I keep in small private git repos.

BTW I still haven't backed up anything to paper yet - I bought a laser printer last year to go down that path, but ended up not doing it yet :/


Many years ago, I used to contemplate how much data you could store on a piece of paper with a laser printer. Like a QR code, but optimized as much as possible. I only got as far as figuring out how to address a page as a bitmap at full resolution, because I'm not good at algorithms.

I was reading Wikipedia today about turbo codes, Viterbi decoding, and similar things, and it would be interesting to read how a very smart person would solve the problem of maximizing data stored on paper with reasonable error correction.
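
For a rough ceiling on the "page as bitmap" idea (my own numbers: A4 paper, 1 bit per dot at 600 dpi; the 30% error-correction overhead is an arbitrary assumption, not a property of any particular code):

```python
# Upper bound on raw data per page when addressing an A4 sheet as a
# 1-bit-per-dot bitmap at 600 dpi, before error correction.
dpi = 600
a4_w_in, a4_h_in = 8.27, 11.69   # A4 dimensions in inches

dots = (a4_w_in * dpi) * (a4_h_in * dpi)
raw_mb = dots / 8 / 1e6          # 1 bit per dot -> bytes -> MB
usable_mb = raw_mb * 0.7         # assume ~30% spent on ECC + alignment marks
print(f"raw ~{raw_mb:.1f} MB/page, ~{usable_mb:.1f} MB usable")
```

That lands in the low single-digit MB range per page, which suggests PaperBack's ~500 KB/page (mentioned upthread) is already trading a lot of density for robustness.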


GitHub recently backed up a lot of open source projects on hardened microfilm and stored it in the Arctic. See https://archiveprogram.github.com/


If only there were some other technique to print a large image file onto a page /s


Sure, encryption over image data and printing works fine for retrieving it at a later date.


I was referring to printing the image pixels rather than a serialization of the binary data. A quality printer can do 600dpi in full color - an A4 page can hold a very large image.


Are optical media vulnerable to radiation as well?


Normal BR-R shouldn't be.


What does BR-R stand for?


Presumably Blu-ray Disc recordable. (More commonly abbreviated to BD-R, rather than BR-R.)


I made a typo. You are very much correct.


Well,

Windows Defender actively interferes with security software development. I've noticed that lots of developers either disable Windows Defender or try to whitelist the development folders. Whitelisting doesn't always work... Windows Defender will block certain behaviors such as token stealing and in-memory attacks.

Also... software developers often have 'test-signing' enabled and all kinds of other security risks that are unique to software development.


True, it is often hard to secure these things when the ultimate goal is to produce a new binary. However, there is obvious low-hanging fruit I've always encountered regardless of the host OS: tools installed into directories that are fully writable (executables included) without administrative/superuser permissions (I'm looking at Eclipse and Android Studio of all things, but the majority of development tools are guilty of this). Seriously, you want the Android SDK in a user-writable place by default? That's questionable at best, and contrary to any sysadmin who would lock these executables down to well-known places.


The days of multiple users logging into the same machine and each having a writable home directory are pretty much over. 2020 mostly consists of single-human machines with a user account and a root account. All the private data is stored in the user account. There is nothing extra of value in the root account.

The 1980s security paradigms no longer really work.


I do sometimes laugh at the naivety of some here. Sorry, what? There is no value in actually securing systems here? It is frustrating to know that in real life, some developers are not really concerned about the security of their dev machines.

Try to install Android Studio (make sure to initialise it!) or Eclipse on your system and tell me where it puts its executables. Go and indulge in the horror that your local adb binary can be silently replaced without your knowledge. While most mobile operating systems are designed to isolate components from each other, we collectively hinder effective security by leaving this low-hanging fruit on our systems.

Sorry, but I often see countless developers who are not that concerned about security and then ask themselves why users are irresponsible about it.


So the ADB binary can be replaced silently by anything running as your user account?

But anything running as your user account can also steal your ssh keys and gain persistence via cron or .bash_profiles or any of a multitude of other methods. How does a writable ADB binary let an attacker do anything more than they could do before?


> But anything running as your user account can also steal your ssh keys and gain persistence via cron or .bash_profiles or any of a multitude of other methods. How does a writable ADB binary let an attacker do anything more than they could do before?

I have no answer for Linux, sadly (though you can disable user-side cron to ensure cron entries are not editable), but on Windows you can lock down your system by disallowing executables from non-whitelisted directories (AppLocker), so you can only run programs from specific directories. It does not really prevent stupidity ("I accidentally ran NotAVirus.exe from our build directory"), but you frustrate attackers trying to run their own executables (especially since most drive-by attacks rely on the Downloads or temp folder being executable). Of course, you need to monitor your build directories for harmful executables, but you significantly reduce the attacker's footprint on the system. Additionally, SSH keys can be stored in a format that requires a password to decrypt them.


The point you were replying to is: Who cares that you can replace the adb binary? Once you have code execution as my user, you have access to my browser data (banking, google developer accounts), my ssh keys, etc.

We need capability-based privilege separation on network-connected machines.


> Once you have code execution as my user

The question is "how"? If you are indeed targeted by state-level attackers, more precautions are needed because you are being actively attacked. However, I stated that "there are low-hanging fruits". I have written a more extensive reply (https://news.ycombinator.com/item?id=25914440), but the short answer is that you can actually prevent code execution in arbitrary areas on Windows. Upon further research, I now know that there are analogous tools (and maybe competitors) for Linux: AppArmor and SELinux. However, I will still see someone replying "there is no solution to this mess" despite there being a solution, just one unused by developers. It is made harder by other developers whose choices make security-conscious developers' lives harder.

> We need capability-based privilege separation on network-connected machines.

Yes, you're correct on that. But abandoning reliable-but-unused security for a shiny new thing seems wrong on so many levels.


There are lots of things that interfere with software development. A shocking amount of my work happens in VMs, or on a dedicated "I don't trust you" workstation that re-pxe boots itself nightly. It shouldn't have to be this way, but there's ample evidence that it needs to be.


> most security researchers have very poor practices in general

That's likely because, not unlike the rest of the software industry, a huge number of them are just in it for the money, and have the minimal knowledge required to do the job (i.e. find exploits, get paid).


Why hunt for bugs like a chump when you can target the infosec and product security folks with lazy opsec and download working PoCs right off their systems?


> In each of these cases, the researchers have followed a link on Twitter to a write-up hosted on blog.br0vvnn[.]io, and shortly thereafter, a malicious service was installed on the researcher’s system and an in-memory backdoor would begin beaconing to an actor-owned command and control server. At the time of these visits, the victim systems were running fully patched and up-to-date Windows 10 and Chrome browser versions. At this time we’re unable to confirm the mechanism of compromise, but we welcome any information others might have

Interesting. Call me naive, but couldn't someone just investigate this right now? Like get the link, act as Chrome, and debug the whole process, from following the link all the way to getting compromised?


It's possible the attacker only serves up the Chrome exploit to some selected group of visitors, rather than anyone who visits the site.


Alejandro Caceres (a bughunter who was attacked) has discussed some additional details of the Visual Studio attack vector at https://www.theregister.com/2021/01/26/north_korea_targeted_....


> and shortly thereafter, a malicious service was installed on the researcher’s system and an in-memory backdoor would begin beaconing to an actor-owned command and control server. At the time of these visits, the victim systems were running fully patched and up-to-date Windows 10 and Chrome browser versions.

I am almost willing to bet $$$ that they would be fine if they had JS disabled.

That also reminds me of something I remember reading about many years ago on an RE site: "You hear about all the bugs they fix in each update. You don't hear about all the new ones they introduced."


> I am almost willing to bet $$$ that they would be fine if they had JS disabled.

I am fully willing to bet $$$$ that they would be fine if they had air gapped their computer.

Unfortunately, disabling JS is only marginally more practical than completely foregoing access to the web; and it is only going to get worse as more sites and services rely on JS.


It really depends on what you do. Amazon.com, for example, works perfectly fine with JavaScript disabled. They do it because they care about selling to everyone and know that even people with old computers and old software buy things.

Web devs keep repeating to themselves that no one browses with JS off, but you can't see the ones that have it off if all you use to survey is JS itself.


Gov.UK is built around the assumption that users have no or broken JavaScript, based on their research into how people use the site. Running at scale, they found about 1 in 93 visitors didn't have working JavaScript. [1]

Mid startup hustle, it makes sense for people to start ignoring this. But if you adopt a mindset of serving users, you'll quickly see why they went with this approach: progressive enhancement of the site based on the capabilities that work, with a fully functional experience extending from lynx upwards. [2]

[1] https://gds.blog.gov.uk/2013/10/21/how-many-people-are-missi...

[2] https://technology.blog.gov.uk/2016/09/19/why-we-use-progres...


Probably the 1 was an unidentified robot...


“It’s just 1% of our users!”


The real solution, which is perfectly practical and realistic, is to install uMatrix and put it in strict mode, then whitelist domains that you trust and use often.

Give it a week and you’ll find yourself very rarely having to make policy changes. We spend most of our time on just a handful of sites.



> disabling JS is only marginally more practical than completely foregoing access to the web

Have you tried it, whitelisting the sites that actually need it? It works pretty well.


I would love to be able to do that on iOS.


> Unfortunately, disabling JS is only marginally more practical

I've done it for years and it's fine if you are willing to put in two seconds of effort here and there; besides some redirects for online purchases, it's mostly painless.

Also no one needs to be making bets here, they were seemingly hit by something that Google still can't identify, from the article:

> the researchers have followed a link on Twitter to a write-up hosted on blog.br0vvnn[.]io, and shortly thereafter, a malicious service was installed on the researcher’s system and an in-memory backdoor would begin beaconing to an actor-owned command and control server. At the time of these visits, the victim systems were running fully patched and up-to-date Windows 10 and Chrome browser versions.


>I am almost willing to bet $$$ that they would be fine if they had JS disabled.

The problem is that you eventually have to enable scripts when you encounter a site that refuses to work with them disabled. There are enough such sites around that most people will instinctively enable scripts whenever they see a broken site, which nullifies any security benefit.


If anything, security researchers should not be "most people". They should know better than to let sites execute untrusted code.

Of course, I believe Google is complicit in this somewhat, since its own intrusive tracking needs JS, and so it encourages everyone to make sites that require it and users to leave it on by default.


(Disclosure: I am not in any way a security researcher)

> I am almost willing to bet $$$ that they would be fine if they had JS disabled

I would guess that's because you do and that's what you use as a proxy to feel safe about the unsafe web. All of us even slightly aware of the security problems of browsing the web have something (for myself I immediately thought, I should add those domain names to my PiHole, just to be safe), but that's the problem with asymmetry: it costs you a lot to defend against everything; it costs a determined attacker very little in a marginal cost sense to spread their efforts across threat vectors. You're pretty sure disabling JS makes things safe. Do you allow CSS? That's another vector. What about all those cool new HTML5 apis we have now? Heck, it's tough to get developers to think at all about accessibility, what are the odds the developers who have to support that under-appreciated field put in a ton of extra time to make sure the features are fuzz tested? Anyone tested what you can do with the subtitle track to an HTML5 video player?

I bought a copy of The Tangled Web a decade ago and never finished it because the whole thing was just too depressing to really come to grips with and instead I too have uBlock, a PiHole, some other Chrome extensions and a Mac as a sort of double-condom to make me feel safe without making me much safer. We are all the US military, happily stockpiling the wrong assets to fight the last war.


>I too have uBlock, ...

Did you mean “uBlock Origin”, from the original author of uBlock?

https://news.ycombinator.com/item?id=24534778


Yes. Good point about the clarification!


Take a look at the most recent stable release notes for Chrome. I don't have a full list of which vulnerabilities are in the renderer processes vs. the browser process. That said, it seems a good number would affect Chrome even without JavaScript.

https://chromereleases.googleblog.com/2021/01/stable-channel...


>I am almost willing to bet $$$ that they would be fine if they had JS disabled.

The attack vector was spearphishing followed by convincing the target to compile a project, where the makefile contained malware. No JS involved.


From the blog post:

>In addition to targeting users via social engineering, we have also observed several cases where researchers have been compromised after visiting the actors’ blog.

Aka, they were compromised after simply visiting a malicious blog. This is why the blog post emphasizes the Chrome Vulnerability Reward Program, and specifically mentions that the victim systems were running fully patched Windows 10 and fully patched Chrome.

>At the time of these visits, the victim systems were running fully patched and up-to-date Windows 10 and Chrome browser versions.

RCE exploits on web browsers are typically written in JS. I would also bet $$$ that if they had JS disabled, they would not have been compromised.


The article notes that there appear to have been exploits from browsing to the attackers' blogs in addition to the malware-laden software project.



