You can't download this image (youcantdownloadthisimage.online)
246 points by calmingsolitude on Nov 27, 2021 | 223 comments



In Chrome, you can just do as the author says, right click and "Save Image As".

Then just go to the folder where it is being downloaded, and copy/paste the file "lisa.jpeg.crdownload" to "lisa.jpeg.crdownload copy".

Rename to "lisa.jpeg" and cancel the download. You now have the image. What's interesting is that you ARE actually downloading this image. It's just that they don't terminate the connection.


We have a security proxy at work that gives you the bits, but then holds the connection open while it does a scan, then resets the connection if it doesn't like something inside. Both Chrome and Firefox [haven't tried IE/Edge, but I assume they'll do something the proxy vendor would want] infer [or are told?] that the connection broke and delete the interim file. Unfortunately, with zip files, the central directory is at the end, so it can't do any scanning until the whole file has come down.

For me, the easiest way to mitigate it turned out to be to use wget [with an appropriate user-agent... say, the same as my desktop browser]. wget gets the bits, but doesn't in any way molest the "partial" download when the connection resets. Then it tries to download the rest using the "Range" HTTP header, and the server says "oh, dude, you already got the whole thing"; wget declares success, and all the bits are in my download folder.
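The same idea, sketched in Python for illustration (a rough sketch; the URL is a placeholder, and the only goal is to keep whatever bytes arrived before the connection stalled):

    # Grab the bits, and when the server goes quiet, keep the partial data
    # instead of deleting it, which is effectively what wget does here.
    import socket
    import urllib.request

    url = "https://example.com/big.zip"  # placeholder
    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})

    data = bytearray()
    try:
        with urllib.request.urlopen(req, timeout=3) as resp:
            while True:
                chunk = resp.read(8192)  # raises a timeout once the server stalls
                if not chunk:
                    break
                data.extend(chunk)
    except (socket.timeout, TimeoutError):
        pass  # the connection stalled; everything useful has already arrived

    with open("download.bin", "wb") as f:
        f.write(bytes(data))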

I believe that we pay, like, a lot for this proxy, which is annoying on two counts: 1) If I can get past it trivially, then presumably competent attackers can, too, and 2) Sometimes it takes a dislike to legitimate stuff, which is how I was forced to learn how to get around it.


Those controls on a proxy are to protect against the careless and the clueless. No competent security team will rely on them to prevent ingress/egress of data or malicious code by skilled individuals.


Correct - this is an attack on the other side of the airtight hatchway (i.e., you must persuade the user to run wget in a certain fashion and run the resulting exe, and if you don't need to persuade the user you could have done something simpler).

https://devblogs.microsoft.com/oldnewthing/20170130-00/?p=95...


I am continuously appalled at the gall of calling that hatchway "airtight".


That's not how these proxies usually work. They only give you enough bytes so the browser doesn't feel stuck while downloading everything and scanning it. The download then suddenly continues at 100 or even 1000 Mbit/s.


Indeed, that's what I've experienced in the past. But what I described above is, for sure, what happened when I downloaded a 200 MB tensorflow wheel the other day.


I just dragged and dropped it to my desktop. This was on macOS, dunno if Windows would allow that.


It downloaded as normal on iOS.


Me too


When I did that, macOS dropped it as a .webloc rather than an image.


I don't understand what this website is supposed to be demonstrating. Some sort of genius version of disabling right click I suppose. But I did download the image, because its contents were transferred to my computer's memory and displayed on my screen. I can see it clear as day.

If Web 3 is just willfully misunderstanding how computers work, I don't see a very bright future for it.


Whatever your browser shows is shown from the cache. So the picture should be in your cache, too.


(Most) browsers actually start displaying an image before it's fully downloaded. In fact, many image formats/renderers are specifically designed with this property in mind, like progressive JPEG, which renders progressively less blurry versions of an image as the browser receives progressively higher-frequency components of the DCT (JPEG's Fourier-like transform).
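A small illustration of that property, sketched with Pillow (a third-party library; file names here are placeholders): a progressive JPEG that is cut off partway through still decodes to a usable, if less detailed, picture.

    # Write a progressive JPEG, keep only the first half of its bytes,
    # and confirm the truncated file still decodes.
    from PIL import Image, ImageFile

    Image.open("source.png").convert("RGB").save("progressive.jpg", progressive=True, quality=85)

    data = open("progressive.jpg", "rb").read()
    with open("partial.jpg", "wb") as f:
        f.write(data[: len(data) // 2])  # simulate an interrupted download

    ImageFile.LOAD_TRUNCATED_IMAGES = True  # decode whatever is there
    Image.open("partial.jpg").save("partial_decoded.png")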

While the bytes are there temporarily, just like with all the other methods discussed, Chrome at least eventually gives up on downloading the "whole" image and displays a broken-image icon in place of the Mona Lisa (and presumably prevents it from being cached and deletes what was there).


It would be interesting if the download stopped after the second-to-last progressive layer but before the last byte; then the .crdownload renaming workaround wouldn't work.


I paused the download and renamed the file to .jpeg, and it worked similarly.


Or, as an alternative, use wget and then press Ctrl+C after two seconds. Voila, you have a usable lisa.jpg.


I did something similar on Firefox. But the image wasn't completely downloaded. Half of it was green.


In this case, F10 and basic GIMP cropping skills also do the job.


The problem with leaving connections open is that there's a limit on how many you can have on the server... I think the author has committed self-DoS :)

https://en.wikipedia.org/wiki/Slowloris_(computer_security)


> The connection has timed out

Now I really can't download the image


He got you!


It's a win-win!


And now you can't download that image.


Yeah it's like a breeder reactor, it makes its own fuel.


It would be possible to really close the connection but hack something so the client isn't informed. (Maybe just doing close() with SO_LINGER=0 and dropping the outgoing RST in iptables would be enough.)
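The close()-with-RST half of that is easy to sketch in Python; the iptables rule to swallow the outgoing RST would still be needed, as noted above (a sketch, not tested against this site):

    # SO_LINGER with l_onoff=1, l_linger=0 makes close() abort the connection
    # with a RST instead of the normal FIN handshake. Pair that with a firewall
    # rule dropping the outgoing RST and the client is never told the
    # connection is gone.
    import socket
    import struct

    def abort_close(conn: socket.socket) -> None:
        conn.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack("ii", 1, 0))
        conn.close()  # sends the RST immediately and frees the local socket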


The client would eventually time out though, right?


Yes, browsers probably have their own relatively short timeouts. (Curiously enough, the system TCP stack will never close idle connections by default, and even if the application requests SO_KEEPALIVE, the default intervals are usually in the hours range.)
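For comparison, an application that wants shorter intervals has to opt in and tune them itself; on Linux that looks roughly like this (a sketch using Linux-specific socket options, with example values):

    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)   # idle seconds before the first probe
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)  # seconds between probes
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)     # failed probes before the connection is dropped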


The website is down now lol


It should end with .offline.


Yep. Although with the right language, even on cheap hardware, that limit might be 1,000 or so.


1000… pft.. just holding open a connection and sending on average a few bytes a second hardly costs anything, and the memory requirements on e.g. Linux are minimal. You can easily do 100k or more with Python and a few hundred megs of memory. Millions are doable with something a little less memory-hungry, or by throwing more memory at it.
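A back-of-the-envelope sketch of that with asyncio (port and trickle rate are arbitrary; you would also need to raise the open-file limit for six-figure connection counts):

    # Each idle connection is just a couple of coroutine objects plus kernel
    # buffers, which is why so many fit in so little memory.
    import asyncio

    async def handle(reader: asyncio.StreamReader, writer: asyncio.StreamWriter) -> None:
        try:
            while True:
                writer.write(b".")      # send a byte now and then, never finish
                await writer.drain()
                await asyncio.sleep(1.0)
        except ConnectionError:
            writer.close()

    async def main() -> None:
        server = await asyncio.start_server(handle, "0.0.0.0", 8080)
        async with server:
            await server.serve_forever()

    asyncio.run(main())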


Most programmers these days don't know what computers are capable of.


if you aren't using 14 layers of abstraction you clearly aren't a real programmer /s


In fairness to them, a lot of programmers didn't come up the way we (presumably) did - if you started using computers/programming in the 80's and building computers in the 90's, your worldview is going to be fundamentally different from someone who started in 2018.

We came from a world where bytes mattered; they come from a world where gigabytes matter.

In some ways caring about that stuff can be detrimental; at the back of my mind there is always that little niggle - you could do this at 1/10th the runtime/memory cost, but it'll take twice as long to write and you'll be the only one who understands it.

These days we don't optimise for the machine but instead for human time, and honestly, that's an acceptable trade-off in many (but not all) cases.

It can be frustrating, when you remember how much of an upgrade getting a 286 was over what you had, that I now routinely throw thousands of those (in equivalent compute) at a problem inefficiently and still get it done in under a second.


When you usually try to download an image, your browser opens a connection to the server and sends a GET request asking for the image.

I'm not a web designer, but that seems rather ass-backwards. I'm already looking at the image, therefore the image is already residing either in my cache or in my RAM. Why is it downloaded a second time instead of just being copied onto my drive?


Oh no, it's still downloading the one it's displaying on screen. You can even see a spinny thing as the icon of the tab on Chrome.

The format allows for showing images when they are partially downloaded, and also allows pushing data that doesn't actually change the image.


Okay? So we still seem to have an accurate representation of the image we want. Why can't I just download that, and what's the point of the rest of the data? If we're already seeing the image, the rest of the data is pointless, no?


Certainly so, yes. But your browser doesn't know that.


But the browser doesn't know that the image is already done, and since the connection is still open and more data could arrive, the browser is obliged to continue downloading.

You could right-click and "Copy Image" rather than "Save as". It achieves what you wanted - saving a copy of the image.


You can totally "download" the image in your RAM by right clicking / long pressing -> "copy image" or equivalent in most browsers. It's just not going to be a byte by byte identical file, and may be in a different format, e.g. you get a public.tiff on the clipboard when you copy an image from Chrome or Safari on macOS, even if the source image is an image/svg+xml.


That's the first thing I tried: "copy image", then, in GIMP, file->create->from clipboard.

And it just worked, with no hassle.


As far as I remember from a previous project a few years ago, the browser doesn't include a Referer header for the download request, which can be used to tell the two apart. (You'll have to disable caching and ETags for this to work.)

However, this is easily defeated by use of the console: select the Sources tab, locate the image, and simply drag-and-drop the image from there, which will use the local cache instance for the source. Works also with this site, at least with Safari.


> [...] which will use the local cache instance for the source

I don't understand why browsers aren't always doing this. They already have the image, why redownload it?


I guess this is for historical reasons. Mind that there is no such thing as a single cached image. There's the downloaded content, a decoded bitmap derived from it, and a buffer for any instance of the image, which may be clipped or distorted (and may have local color management applied, e.g., converted to an 8-bit color range). (At least, it used to be that way. I faintly remember that this used to be a 4-step process.) When memory wasn't ample, any of these except the instance buffer(s) may have been purged, and an instance buffer doesn't represent the original image anymore. So it makes sense to get a fresh, clean image in the original encoding.


> They already have the image, why redownload it?

They don’t already have the image. They have part of the image. Because the connection hasn’t closed, as far as the browser is concerned, it’s still in the process of downloading it.


> When you usually try to download an image, your browser opens a connection to the server and sends a GET request asking for the image.

I can't vouch for chromium-*, but my Firefox does NOT do that. I've just tested it.


I have a problem understanding: what problem is this solving?

When the image is on my screen I can just screenshot it.

This is a common problem with using something in an insecure environment; that's why companies go to such lengths to encrypt movies along the whole chain from source to the display, and even those are regularly dumped.


It's not "solving" anything, just demonstrating an interesting gimmick


What's the gimmick? I just saved that image to Photos on iOS.


Definitely a gimmick. Interesting might be a bit of a stretch


And even if they figured out some DRM method to prevent screenshotting/screen recording, I can still point my phone camera at my monitor and capture it that way, if I really want to. There is always a way around whatever they try to do.

If I can see it, I can make a copy of it.


> I can still point my phone camera at my monitor and capture it that way

Back in the late 1990s/early 2000s (this was so long ago that I cannot quickly find a reference), there were proposals to require all non-professional audio and video recorders to detect a watermark and disable recording when one was found. Needless to say this was a terrible idea, for several reasons.


But because they try, the rest of us suffer the consequences of more expensive and slower hardware and all kinds of other problems.


Yes. DRM always hurts the legitimate users more than the "pirates". Same with disabling right click or otherwise trying to prevent downloading images.


I don't know about browser internals, but I would guess that the browser decodes the image once into a format that can be shown on the page (so from PNG/JPG/WEBP into an RGBA buffer) and then discards the original file. This saves a bit of memory in the 99.99% of cases where the image is not immediately saved afterwards.


More likely the original file is saved in the browser cache. That's why it loads faster when you reload the page, and slower when you do a full reload by holding down shift. In Firefox you can see the files with about:cache, and find them in ~/.cache/mozilla/firefox/e1wkkyx3.default/cache2/entries/ or similar (they have weird names with no extension, but the file command will identify them, in their original format). In Chrome they're packed into files with metadata like the URL at the start. You can extract the original file by looking at a file in the cache folder [1] and snipping the header off (you can guess where it is by looking at the file contents with xxd or a hex editor).

More info (and link to a Windows viewer tool) here: https://stackoverflow.com/questions/6133490/how-can-i-read-c...

[1] For me on Linux, Chrome's is ~/.cache/google-chrome/Default/Cache/
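As a rough illustration of the header-snipping step (using the path from [1]; newer Chrome versions may keep the cache elsewhere, and this only catches entries that happen to contain a JPEG):

    # Scan cache entries for an embedded JPEG and carve it out between the
    # SOI (FF D8 FF) and EOI (FF D9) markers.
    from pathlib import Path

    cache_dir = Path.home() / ".cache/google-chrome/Default/Cache"

    for entry in cache_dir.iterdir():
        if not entry.is_file():
            continue
        blob = entry.read_bytes()
        start = blob.find(b"\xff\xd8\xff")
        end = blob.rfind(b"\xff\xd9")
        if start != -1 and end > start:
            out = Path(entry.name + ".jpg")
            out.write_bytes(blob[start:end + 2])
            print("carved", out, "from", entry.name)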


Interesting if that is the explanation. I wonder if any browsers offer a "privacy mode" where the original images are saved, thereby preventing the server from knowing which specific images you chose to save and were therefore interested in. I wonder how often that information is logged, and whether those logs, if they exist, have ever been put to a purpose such as in a court case.


I'm pretty sure it only discards the original after x number of other (new) images have been decoded. (Or perhaps it's based on memory footprint?)

I ran into a Chrome performance bug years ago with animations, because the animation had more frames than the decoded cache size. Everything ground to a halt on the machine when it happened. Meanwhile older unoptimized browsers ran it just fine.


One cool related thing is that (I believe) modern graphics cards (even Intel) can store and use JPEG blocks directly from GPU memory, so it's not necessarily beneficial in the long term to convert to RGBA in advance. Though I think no modern browser actually does this, especially given how power-cheap decoding JPEG (with SIMD) already is and how likely it is that GPU bugs would interfere.


I don't think they can use JPG directly; that would be a waste of transistors given that the graphics world uses other compression formats like ETC1, BC, ASTC and so on.

It is however perfectly possible to decode blocks of JPG on a GPU by using shader code.


I'm pretty sure that Safari (and probably most browsers) on MacOS renders JPEGs via CoreImage, and I have seen hints that CoreImage has various GPU-accelerated pathways, though I don't know whether those include DCT or JFIF on the GPU.


This used to be common behavior, but changed over time in most browsers.

Your guess is as good as mine as to why.


There's another, more malicious way to achieve this. Granted, I haven't tried it in years, but it was possible back in 2017 when I tested it.

The idea is to fake the image that's being displayed in the IMG element by forcing it to show a `background-image` using `height: 0;` and `padding-top`.

In theory, you could make an IMG element show a photo of puppies and if the person chose to Right-click > Save Image As then instead of the dog photo it could be something else.

For some reason I can't OAuth into CodePen, so for now I can't recreate it publicly.



Never gonna give you up...


You could also just do like we did for years and check the referrer for the image request; if it isn't your web server, you redirect the file to whatever you want, and the end user has no way of knowing. And because the trick is done on the server side, viewing the page source won't get around it.

This is the same method used to prevent hot linking to images back in the day.
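The classic check was only a few lines of server code; a sketch with Python's standard http.server (the hostname and file names are placeholders):

    # If the Referer isn't our own site, serve a decoy instead of the real image.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class ImageHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            referer = self.headers.get("Referer", "")
            path = "lisa.jpg" if "example.com" in referer else "decoy.jpg"
            with open(path, "rb") as f:
                body = f.read()
            self.send_response(200)
            self.send_header("Content-Type", "image/jpeg")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    HTTPServer(("", 8080), ImageHandler).serve_forever()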


Modern browsers suppress the referrer. Relying on it for functionality is not a good idea.


Fair point. You can accomplish the same by comparing the IP address that the image request came from against your servers.


Wouldn't that just mean comparing the user's public address? It is the browser that is trying to download the image from your servers.


The shortest route, yes, but I'd rather do a whitelist check, because depending on your infra, there might be a lot more things that make requests for the content.

But the concept is the same: server-side, check the IP of the request and take action based on that check.


Not very new; the technique has probably been around since the 2000s... e.g. you can't right-click, save-as on the web version of Instagram because all the images are background-images attached to DIVs. In the "old days" there'd be a 1x1 transparent GIF above the image, so any downloader would grab that instead.


More like 1990s, but yes.


What you and I describe isn't new, except you're talking about the use with DIV elements which don't have a "Save image as" menu item on right-click.

IMO, browsers should remove `background-image` support from IMG elements for that reason.


This does create a self-inflicted Slowloris attack on the server hosting the image, so this site is probably more susceptible to the hug of death than most.


It has always baffled me that browsers even try to re-download an image (or a page, or whatever) I asked them to save, despite the fact that they have already downloaded and displayed it. What I would want them to do instead is just dump it from memory.

And this sounds particularly important when it's a web page that has been altered at runtime by JavaScript - I want the actual DOM dumped, so I can later load it and display exactly what I see now.


I've always thought the same. The data is there, why go through the trouble of downloading it again?


If there were a standard checksum request within HTTP, sure. Otherwise you're going to break some workflows with this kind of aggressive caching. Maybe it should be an opt-in setting (and maybe it already is).


But the data is not there! It's just displaying a partially loaded image.


Does it matter if you want a bit-for-bit copy of what's on the screen?



Add -N / --no-buffer:

> Disables the buffering of the output stream. In normal work situations, curl will use a standard buffered output stream that will have the effect that it will output the data in chunks, not necessarily exactly when the data arrives. Using this option will disable that buffering.

and it works


Results in an empty file.


Increase the time a bit; it looks like sometimes it takes more time to download:

    curl --max-time 2 https://youcantdownloadthisimage.online/lisa.jpg > lisa.jpg


Nevermind, looks like the MobaXterm shell provides a non-standard curl implementation:

    $ which curl
    curl: aliased to _tob curl

After installing curl with apt-get it works.


I hate it when people do that. You can wonder for hours why something obvious doesn't work as it should, and in the end discover someone decided to implement something substandard, often for no good reason.



That's every distro and *nix derivation.


Well, Windows too. I recently had to set up something simple on a Windows 10 machine; I quickly checked by tab-completion whether a python binary was available, then copied my setup script over, only to discover someone smart had decided to redirect the binary to the Windows Store. Yes, I know the rationale behind this, but still. Just like hijacking NXDOMAIN.


And powershell!


It downloaded on Safari on iOS. Long press on the image and tap Add to photos.


Same for me, but the webpage gave the impression that it was still downloading: even after the download completed, at least in Firefox on iPhone, it still showed that it was downloading.


I could copy the image from Firefox. Are you sure you downloaded it instead of copying it?


Ditto


This is a perfect (if maybe unintentional) example of how to get help from otherwise uninterested technical folk: make an obviously technically incorrect claim as fact, and watch as an entire army comes out of the woodwork giving you technical evaluations :)


Cunningham's Law [1]: "the best way to get the right answer on the internet is not to ask a question; it's to post the wrong answer".

[1]: https://meta.m.wikimedia.org/wiki/Cunningham%27s_Law


Though note that Cunningham disavows the law attributed to him:

> Cunningham himself denies ownership of the law, calling it a "misquote that disproves itself by propagating through the internet."

https://en.m.wikipedia.org/wiki/Ward_Cunningham


That does sound like the right answer, posted in response to the wrong answer.


His opinion on this matter is not of any importance, as confirmed by a great many people who have found unlikely fame. Just ask Mrs. Streisand.


I've tried that a gazillion times over the years. It works like a charm.


With Google search so broken sometimes the only way to discover things is through nerd baiting.


People hate DRM. Thus everyone will work their hardest to bypass it.


I’m aware of this phenomenon, but have never tested it (confidently posting something incorrect to get responses with the real answer). Has anyone here actually tried this? How did it work?


Anthony Bourdain used to find the best local cuisine by going onto message boards (anonymously, I assume) and saying X is the best restaurant, only to receive a flood of recommendations.

https://archive.md/0UQsd: Ctrl + F for "nerd fury" to find where the claim starts


Downloaded on my iPhone with a single tap.


Downloaded on my mac with two clicks (FF): open in new tab, download


Worked on Safari (Mac) too by dragging and dropping into my downloads.


Yes, just copy or save to gallery and it’s done…


I was about to "Save as..." when suddenly it struck me that this would be an incredible bait to spread a virus.


Too late. You already download an image into your cache just by viewing it.


An image virus? Please do elaborate.


"Buffer Overrun in JPEG Processing (GDI+) Could Allow Code Execution (833987)" [0]

[0] https://docs.microsoft.com/en-us/security-updates/SecurityBu...


It's bizarre to claim image viruses exist today when you link to a nearly 20-year-old article about a buggy OS.


If it happened in the world's most used client OS 20 years ago, it's clearly not impossible for it to happen again. Not that much has happened with computers since then.

I remember being like you, believing no virus could come from an image or other data. I have been proven wrong enough times since then. We keep assuming things as programmers, and sometimes we get it wrong, and then there is a new vulnerability.


There was a more recent case with image metadata parsing where, when the author tried to report it, the image also broke YouTrack.


using "filename" within the "Content-Disposition" header, you could theoretically trick a user into downloading a non-image file despite the url containing lisa.jpg

I think certain browsers have security limits on the file-extensions you download, which may include when image->"save as" is used.


Don't forget that you can literally concatenate JPEGs and zip files [the header is at the start of a JPEG, but at the end of a zip file], so a valid JPEG can also be a valid zip file.

Combine that with something like Safari's insistence on automatically exploding zip files on download, and you've got yourself a party.
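Concatenation really is all it takes; a minimal sketch (file names are placeholders):

    # A JPEG is parsed from the front; a ZIP is located via the end-of-central-
    # directory record at the back, and most ZIP readers tolerate prepended data.
    with open("picture.jpg", "rb") as jpg, open("payload.zip", "rb") as zf, \
         open("combo.jpg", "wb") as out:
        out.write(jpg.read())
        out.write(zf.read())

The result opens as an image in a viewer and as an archive in, e.g., Python's zipfile module.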


screenshotted it instead


You really can't - the HN hug of death has killed it!


The image was dead in the first place, hence it cannot be downloaded or opened.

That's the joke, I guess.


No, they DoS'd themselves with their "viewable but not save-as-able" technique. Leaving connections open will do that. The image is visible right now but the browser can't save what appears to be an incomplete file.


Graceful nongradation


You can't download the code on github either.

Because github is currently down.


A rare occurrence, I imagine, but a good reminder not to have everything in one place.


In Chromium-based browsers the quickest method I've found is "right click -> Inspect" on the image, then click the Sources tab in the dev tools window. From there you can drag or save the image shown without issue. My guess as to why this works is that the Sources view pulls from the already-loaded content of the page rather than fetching it again, based on the lack of packets when trying this with a packet capture running.


In Firefox, besides that, you can press Ctrl+I, open the "Media" tab, and pick any of the graphics that were already downloaded to display the page. Then you can save the picture(s) you're interested in. I suppose the source of it is the local cache.

Does not work in this particular case, of course, because the whole image is not yet in the cache.


It works in this case, too: at some point the connection does close (if it doesn't, just hit Escape), and you can save the image as usual, now from the cache.


Right click copy is faster.


It copies the bits shown on the screen, so it loses e.g. EXIF data (if any), the original color space, etc.


On iOS, long press > add to photos

I now have a photo of the Mona Lisa in my camera roll.

I guess this is one of those things that wouldn’t be as edgy with the actual mechanism stated. :)


No issue downloading it on iOS.


Same. Oddly, the page itself remained in a loading state even after downloading succeeded.


Great! Just what we need these days: more tricks to screw around with the simple, straightforward implementation of the HTTP protocol! And just in time for Christmas.


Firefox on Android: long press, save image, no other action taken, and it shows up in my device's photo gallery.

(edit: clarity)


If I wanted a non-downloadable image I would make it from 1px wide/tall colored divs.


I thought this was what it was going to be! Another method would be to generate a plane with the same number of vertices as pixels, store the pixel color values as an attribute, and then render the mesh to a canvas.


You can right-click canvas and save it as image.


Oh, you're right! I guess you'd have to disable the context menu too.


Which doesn't help either because in the Inspect view you can just click "Screenshot node" on the HTML element.


I'm learning a lot today.


Pretty sure that was actually used in emails at some point, just with tables, to get around email clients not loading images.


Email clients generally don't load external images. The majority should still display images that are sent as part of a multipart/mixed message though, and those should take up significantly less space than thousands of divs/tds and color attributes.


I actually used this to generate graphs in JS/HTML in the 1990s. :-)


Out of curiosity, how was the performance (of course normalized to performance of that era)?


Here's a somewhat older approach splitting charts into linear runs of 1x1 images, which has some statistics at the bottom of each chart:

https://www.masswerk.at/demospace/relayWeb_en/chartset.htm

(Or see https://www.masswerk.at/demospace/relayWeb_en/welcome.htm and select "charts". Total time for calculations and rendering was then in the roughly 1-second range. The real problem for using this in production was that these charts could be printed on Windows only with PostScript printers. I think this was eventually fixed in Windows 98 SE.)


In Chrome, Right-Click on Image → Inspect → Right-Click on <img src="lisa.jpg" alt="Mona Lisa"> Tag → Capture node screenshot → Save


Safari Mac, I dragged it out of the page and into a Finder window, and it saved.


Yeah, I just:

1) used the “copy image” function Safari on iOS.

2) took a screenshot.

… back to the drawing board NFT bros.


I right-clicked and pressed Open Image in a New Tab and then pressed Escape to disconnect the browser from the server. No infinite download here.


This sure seems like a weakness of the so-called "modern" web browser. Simpler, safer clients and proxies have no trouble dealing with a server that is (deliberately) too slow.

For example,

curl

    curl -y3 -4o 1.jpg https://youcantdownloadthisimage.online/lisa.jpg
tnftp

    ftp -q3 -4o 1.jpg https://youcantdownloadthisimage.online/lisa.jpg
links

    xy(){ tmux send "$@" ;};
    xy "links https://youcantdownloadthisimage.online/lisa.jpg";
    xy Enter;
    sleep 2;
    xy s;
    xy Enter;
    tmux capture -t3 -p|grep -q Overwrite && 
    xy o;
    sleep 1;
    xy a;
    xy q;
    xy y;
haproxy

    frontend bs 
    bind ipv4@127.0.0.1:80
    use_backend bs if { base_reg youcantdownloadthisimage.online/lisa.jpg }

    backend bs
    timeout server 420ms
    server bs ipv4@137.135.98.207:443 ssl force-tlsv13 ca-file /etc/ssl/certs/ca-certificates.crt verify required


netcat w/stunnel

   cat << eof > 1.cfg
   [ x ]
   accept=127.0.0.255:80 
   client=yes
   connect=137.135.98.207:443
   options=NO_TICKET
   options=NO_RENEGOTIATION
   renegotiation=no
   sni=
   sslVersion=TLSv1.3
   eof
   stunnel 1.cfg
 
   printf 'GET /lisa.jpg HTTP/1.0\r\nHost: youcantdownloadthisimage.online\r\nAccept-Encoding: gzip\r\n\r\n' \
   |nc -w1 -vv 127.255 80 |jpgx > 1.jpg
openssl

   printf 'GET /lisa.jpg HTTP/1.0\r\nHost: youcantdownloadthisimage.online\r\nAccept-Encoding: gzip\r\n\r\n' \
   |timeout 3 openssl s_client -tls1_3 -connect 137.135.98.207:443 -ign_eof|jpgx  > 1.jpg
jpgx (custom filter: extract JPG from stdin; foremost will not work for this image, see byte 8114, etc.)

    sed '1,3s/^ */ /;4,18s/^ *//' << eof > jpgx.l
    int fileno(FILE *);
    #define jmp (yy_start) = 1 + 2 *
    #define echo do {if(fwrite(yytext,(size_t)yyleng,1,yyout)){}}while(0)
   xa "\xff\xd8"    
   xb "\xff\xd9"    
   %s xa 
   %option noyywrap noinput nounput
   %%
   {xa} putchar(255);putchar(216);jmp xa;
   <xa>{xb} echo;yyterminate();
   <xa>.|\n echo;
   .|\n
   %%
   int main(){ yylex();exit(0);}
   eof
   
   flex -8iCrf jpgx.l;
   cc -std=c89 -Wall -pedantic -I. -pipe lex.yy.c -static -o jpgx;


right click > copy image > paste somewhere

Works for me :) (I pasted in Telegram FYI)


On a Google Pixel there is a new feature where I can go to the recent-apps screen and it detects images, letting me tap them to run Google Lens, save the image, or share it. I was able to save the image: 506 kB, 841x1252 (1.1 MP).


Drag and drop to Desktop on macOS works too.


Copy the unfinished download file in your downloads folder (for me, lisa.jpg.crdownload) and rename it to lisa.jpg.


Just wrote the same. Didn't see your comment earlier. So really, you can absolutely download this image!


Hm, opened the Chrome console and saved it from Sources there; took 30 secs :)


It's definitely hard to download an image that doesn't load. :(


The image saves immediately on an iPhone


Works fine with wget; it just keeps hanging, but if you Ctrl+C it and open the file it'll look fine.

The trick is to have nginx never time out and just hang indefinitely after the image is sent. The browser renders whatever image data it has received as soon as possible even though the request is never finished. However, when saving the image the browser never finalizes writing to the temp file, so it thinks there is more data coming and never renames the temp file to the final file name.
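A guess at a minimal reproduction of that behavior (not the site's actual config), sketched with asyncio instead of nginx: send the whole JPEG without a Content-Length, then simply never close the connection.

    # The browser can render the bytes as they arrive, but since the response is
    # delimited only by connection close and the connection never closes,
    # "Save Image As" is never told the file is finished.
    import asyncio

    async def handle(reader: asyncio.StreamReader, writer: asyncio.StreamWriter) -> None:
        await reader.readuntil(b"\r\n\r\n")      # consume the request headers
        with open("lisa.jpg", "rb") as f:
            body = f.read()
        writer.write(b"HTTP/1.1 200 OK\r\nContent-Type: image/jpeg\r\nConnection: close\r\n\r\n")
        writer.write(body)
        await writer.drain()
        await asyncio.sleep(86400)               # ...and then just hang

    async def main() -> None:
        server = await asyncio.start_server(handle, "0.0.0.0", 8080)
        async with server:
            await server.serve_forever()

    asyncio.run(main())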


My usual way of downloading images is to click and drag the image into my downloads folder on my Mac. Worked fine for me from Safari. Am I missing something?


Load the website in Firefox with the Network Panel open, hit "Escape", and right-click "lisa.jpg" -> "Save Image As"


Firefox mobile did hang when trying to download, but after pressing cancel the image was downloaded and viewable in my gallery app.


Same here


The site does not send a Content-Type header for the main web page, so I get a download dialog when trying to open it.


Aside from all the folks who can download the image one way or another, I'm pretty disappointed that the technique here is simply using a web-server that doesn't work like clients expect. People have broken links or incorrect redirects all the time, but we don't generally make a fuss over them.


Other methods have been posted, but I wanted to share mine. Mac needed:

1. Secondary click image → "Copy Image"

2. Open Preview

3. File → New from Clipboard

4. Save image


Yeah, I couldn't figure out what the fuss was about at first, as I simply right-clicked, copied and pasted into mspaint. I rarely need to save an image; more often than not I just paste it into some other application.


An interesting workaround for Android 12 users: go to the app switcher and there will be a badge over the image which you can click to get "copy", "share" and "save" buttons. Save it from that panel and it works just fine.


Downloading the image worked just fine on an iPad. So not sure what they are talking about.


Another idea is canvas: https://jsfiddle.net/dvg45pcz/

But I don't know how to get it to not appear in network sources.

Or WASM, but I don't know how to write that.


You could likely pack and unpack it over WebSockets...


That's a great idea. I also thought I could just load a JSONP file with the string in it, but no matter what, I maybe can't get around it: when I set the string as image.src, it shows up in the Chrome Network tab under Media.


Does WebRTC show in the network console?


That's a great idea. Found this; sounds like Chrome has a not-great way: https://stackoverflow.com/questions/17530197/how-to-do-netwo...


No one seems to mention that Chrome keeps spinning on the HTML load as well and eventually kills the image. This means the webpage itself is broken and fails to work, not just the download. So... this just does not work for anything.


Looks at image.

Looks at prntscrn key.

This is basically a carefully targeted reverse Slowloris, and it involves right-clicking an image. Why do I fear that this use case and this level of madcap solution will all lead back to NFT bros...


I chuckled about this. However you can drag and drop it to your Desktop on macOS.


This one is pretty easy, but a friend recently showed me one (a gallery of some sort) I couldn't figure out quickly, which was downloading chunks in nonstandard ways and piecing them together with uglified JS.


Somehow right clicking + saving worked fine on Safari (desktop). I tried it a couple of times and it worked in all cases; sometimes it took a second, sometimes more. Perhaps the server dropped the connection?


On WebKit-based browsers at least, you can just drag the image out; it doesn't bother trying to redownload it, it just reconstructs the image file from memory. This also applies to copy/paste on iOS.


There's a multitude of ways to workaround this hack. You can easily grab the screen area via the OS if need be. Seems pointless to try to restrict access if it's viewable in a browser.


Koan of the day: Can you download something that doesn't load?


Just made a screenshot


I would have expected this to do something different, like rendering the image via WebGL (so it looks like an <img>, but isn't easily downloadable).


  $ wget https://youcantdownloadthisimage.online/lisa.jpg
Wait for like 5 seconds for it to finish downloading and then hit Ctrl-C.


I right clicked (on a Mac), clicked "copy image" and I pasted that into preview just fine.

Is there some reason why that's an uninteresting exception?


Using Safari on iOS. I was able to save it instantly…


Likewise, using Firefox on iOS.


It worked fine on iOS (confirmed in my photo library)


https://imgur.com/a/60ArLwg – here you go!


On iOS: tap hold the image, save to photos. Done.


wget and aria2c both work. I get a JPG image, 54.8 KiB, SHA256 sum 204788602166C017B8FEF5D63EDFD814DC9865233C410BCDAD713F78DAE5AF18


I thought it was going to be some obfuscated JS or maybe the image reconstructed from a grid of subimages or some such thing.


What? Sure, initiating the "Save as..." triggers this endless-download thing,

but the initial load is the image, and opening up dev tools, finding it in the sources/cache, and saving it from there works: Chrome knows it's 56.1 kB or whatever and just saves it out of the cache, done.

Interesting, but what was the point they're trying to make?


Went to the download folder, renamed lisa.jpg.crdownload to lisa.jpg. Cancelled the download in the browser.


Right-click and select "copy image". Why would you want this when you can copy the image anyway?


I just right-clicked the image, selected 'copy image', opened Gimp and pasted as new image.


Hm, on Safari mobile, hold down on image and “Add to Photos”, downloads fast and fine. How come?


I just simply long tapped on the image and tapped save to photos on my iPhone and it was saved.


How to download this image:

1. Open Inspect (right click and hit "inspect")

2. Click the "Network" tab

3. Refresh the page (while clearing the cache Command+Shift+R)

4. Right click on "lisa.jpg" in the list view under the "Network" tab

5. Click "Open in new tab"

6. Right click the image on the new tab

7. Click "Save image as"

Man I can't believe these clowns (or myself for typing all this out--don't know who is worse)


Did you even try this before posting? These steps are no different than just right-clicking the image and choosing "Save image as". It still results in a download that never finishes.


Inspect > Copy > Image Data-URL works perfectly fine in Firefox.


Did you even read the page? There's no reason to think that this approach would work.


What actually works: take a snapshot of the element via the Elements panel.


I had zero issues downloading the image with brave. Saves normally like any other picture.


iOS Safari saved the image in my photos, as any regular picture that I do a long tap on.


I was using the Tor browser from my Android and had no problem downloading the image.


I just right-clicked, copied the image and pasted into an image editor in windows.


Copy and paste works fine.


If I open the image in a new tab, after 1.5 minutes the "content download" is ready. Also, I had no problem right-clicking it and hitting "copy image".


I guess this is very similar to never calling res.end() in Node.js servers.


Yes I could. No issues. Save to photos on iPhone.


You can on iOS safari. No hacks/workarounds


iPhone Safari - Instant Download, no problem!


iPhone > long press > Add to photos

What am I missing?


I posted the same snarky comment too. Seems the headline should be “You can’t download this exact image, but you can copy the presentation image via other means.”

More of a play on words for how copy and download oftentimes mean the same thing even though technically they're different.


If you wait long enough it downloads.


Downloaded on iPhone


`prt sc` anyone?


99% sure it said download, not screenshot


Works fine on iOS.




