gridwatch hiccup.

Apologies if your viewing pleasure was interrupted, but for some time a certain West Coast university specialising in 'green' issues has been stressing the server with continuous data downloads. At about 17:00 they managed to find a download query that pushed the server into swap.

It took half an hour to shut down the web server and finally the MySQL server, and to identify the source of the attack.

It has now been blocked from further malicious access and service is back to normal. Some data may have been lost.

Reply to
The Natural Philosopher

IYCBA (if you can be arsed), drop the compsci dept an email.

It may have been an honest student mistake. One of mine (at a previous college) took out the Radio Times TV schedule website once because he'd got the polling interval a bit on the "short" side.

They were very understanding when I explained it was an honest mistake and not a malicious attack.

Reply to
Tim Watts

It gets more curious. A month ago they were running a timed script every 5 minutes, obviously hand coded, to download just the demand data.

It shouldn't have been a problem.

No other massive requests seem to have been made.

I am still hunting the logs.

Reply to
The Natural Philosopher

I once worked for a university. Students writing a web spider badly (IIRC) managed to clobber our DNS into submission (or at least slowness), and got us an irate phone call from another uni. IIRC it was a lab full of PCs.

Reply to
Chris Bartram

Was this deliberate, or just a monumental c*ck up?

Brian

Reply to
Brian Gaff

I am beginning to think it was simply a coincidence.

I ran out of memory, possibly simply due to too many things going on, and what suffered was a few download queries. They needed to use the disk, but the disk was busy swapping, so they never completed; they kept retrying, slowing the swap rate further, until the whole server slowed to a snail's pace.

Reply to
The Natural Philosopher

Your box is lacking load control software; maybe you should add some? With a web server it's probably better to throw away some requests when you get a bit slow at responding than to just hope the problem will go away. The web has quite a good retry mechanism: the user will hit refresh a few seconds after the failed request.

(The exchanges we designed would load shed if things got that bad; you wouldn't have been able to make calls at certain times if they didn't. That was one of the big problems with Unix when the stuff was designed, and I haven't seen much in the way of fixes to it since (not that I have looked).)
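
For illustration, a minimal load-shedding sketch in Python, written as WSGI middleware. This is nothing the gridwatch box actually runs, and the cutoff value is a guess:

import os

LOAD_LIMIT = 8.0  # hypothetical 1-minute load average cutoff

class ShedLoad:
    """Wrap a WSGI app; refuse cheaply with a 503 when the box is busy."""

    def __init__(self, app, limit=LOAD_LIMIT):
        self.app = app
        self.limit = limit

    def __call__(self, environ, start_response):
        load1, _, _ = os.getloadavg()  # Unix-only
        if load1 > self.limit:
            # Rely on the user's retry mechanism: they'll hit refresh.
            start_response('503 Service Unavailable',
                           [('Content-Type', 'text/plain'),
                            ('Retry-After', '30')])
            return [b'Sorry, server too loaded - try again shortly.\n']
        return self.app(environ, start_response)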

Reply to
dennis

I am pondering the options: that is certainly one of them.

Yes, totally agree.

It's not hard to put up a 'sorry, server's just too loaded to do this' response.

The question is understanding what 'being loaded' means.
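
One plausible answer, sketched in Python (Linux-specific, and both cutoffs are guesses, not measured values): call the box loaded when the run queue is long or when memory headroom is small enough that the next bulk query would mean swapping.

import os

def is_overloaded(max_load=8.0, min_avail_kb=200_000):
    """Guess at 'being loaded': high load average or little memory left."""
    load1, _, _ = os.getloadavg()
    if load1 > max_load:
        return True
    with open('/proc/meminfo') as f:  # Linux-specific
        for line in f:
            if line.startswith('MemAvailable:'):
                # Below this and the next bulk query pushes into swap.
                return int(line.split()[1]) < min_avail_kb
    return False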

Reply to
The Natural Philosopher

Run a low priority request and see how long it takes to respond; throw away requests at some value of 'too long'.
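
As a sketch of that canary idea (the probe URL and the cutoff are invented): time a cheap request in the background, and shed traffic while it runs slow.

import threading
import time
import urllib.request

CANARY_URL = 'http://127.0.0.1/ping'  # hypothetical cheap endpoint
TOO_SLOW = 2.0                        # seconds; illustrative only

shedding = False  # request handlers return 503 while this is True

def probe(interval=10.0):
    global shedding
    while True:
        start = time.monotonic()
        try:
            with urllib.request.urlopen(CANARY_URL, timeout=TOO_SLOW) as r:
                r.read()
            shedding = (time.monotonic() - start) > TOO_SLOW
        except OSError:  # timeout or refused connection both count as slow
            shedding = True
        time.sleep(interval)

threading.Thread(target=probe, daemon=True).start()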

Reply to
dennis

Apache listening on localhost only, with Nginx picking up the traffic and then proxying to Apache?

When Apache's response time gets slow, have Nginx send an error page saying try later?
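
Nginx can do this natively (a proxy timeout plus a custom error_page). Purely to show the shape of the idea, here is the same thing as a toy Python front proxy; the ports and timeout are invented:

from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
import urllib.request

BACKEND = 'http://127.0.0.1:8080'  # Apache bound to localhost (assumed)
SLOW = 5.0                         # seconds before we give up; illustrative

class FrontProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        try:
            with urllib.request.urlopen(BACKEND + self.path,
                                        timeout=SLOW) as upstream:
                body = upstream.read()
                self.send_response(upstream.status)
                self.send_header('Content-Type',
                                 upstream.headers.get('Content-Type',
                                                      'text/html'))
                self.send_header('Content-Length', str(len(body)))
                self.end_headers()
                self.wfile.write(body)
        except OSError:
            # Slow or dead backend (in this toy, any backend error):
            # serve our own "try later" page instead of hanging.
            body = b'Server busy - please try again in a minute.\n'
            self.send_response(503)
            self.send_header('Content-Type', 'text/plain')
            self.send_header('Content-Length', str(len(body)))
            self.end_headers()
            self.wfile.write(body)

if __name__ == '__main__':
    ThreadingHTTPServer(('', 8000), FrontProxy).serve_forever()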

Darren

Reply to
D.M.Chapman

Another way: limit the number of allowed connections to some value that the server can support. It doesn't have to be a number where the server will struggle; anything lower that doesn't get hit (often, or at all) would do. I think this is a native feature of Apache.
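
It is indeed native to Apache (MaxRequestWorkers, formerly MaxClients). The same cap as a Python/WSGI sketch, with an invented limit:

import threading

MAX_CONCURRENT = 20  # illustrative cap the box can comfortably serve
slots = threading.BoundedSemaphore(MAX_CONCURRENT)

def capped_app(environ, start_response):
    """WSGI app that refuses cheaply once the connection cap is hit."""
    if not slots.acquire(blocking=False):
        start_response('503 Service Unavailable',
                       [('Content-Type', 'text/plain'),
                        ('Retry-After', '10')])
        return [b'Too many connections - try again shortly.\n']
    try:
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [b'data goes here\n']  # stand-in for the real download
    finally:
        slots.release()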

Reply to
Dave Liquorice

Mm. More a case of concurrent bulk data downloads.

TBH upping the RAM will cost, but not that much.

Reply to
The Natural Philosopher
