News: Computer antivirus strategies in crisis

Discussion in 'Article Discussion' started by GreatOldOne, 4 Sep 2003.

  1. GreatOldOne

    GreatOldOne Wannabe Martian

    Joined:
    29 Jan 2002
    Posts:
    12,092
    Likes Received:
    112
    Also from New Scientist:

    The speed with which US law enforcers last week tracked down Jeffrey Lee Parson, one of the alleged culprits behind the destructive computer virus MSBlaster, was heralded as a great victory in the battle against computer crime. But an investigation into antivirus software shows that there is no cause for celebration. Antivirus specialists are fighting a losing battle against malicious code like viruses and worms, it concludes.

    The research, undertaken at Hewlett-Packard's labs in Bristol, UK, is the first to evaluate the effectiveness of antiviral software. It shows that the way we fight viruses is fundamentally flawed, because viruses spread faster than antivirus patches can be distributed. By the time the antivirus software catches up, the damage has already been done, says Hewlett-Packard researcher Matthew Williamson.


    More here
     
  2. Alaric

    Alaric code assassin

    Joined:
    3 Nov 2001
    Posts:
    2,881
    Likes Received:
    0
    ok, I really don't see the fuss about viruses!

    It is my opinion that it's worms that pose the biggest threat, and they can normally be split into two varieties (off the top of my head)
    1. Buffer overruns (smash the stack, whatever)
    There isn't really any excuse for these, these days: the theory behind them has been known since at least 1988 and is well drilled in during any software engineering course.
    Recently there have been developments in compilers that may help stop them in software products. Things like W^X in OpenBSD will also ensure they are stamped out, and it will be long overdue.
    (I think IBM is also trialling network-level protections, but that just seems like a stopgap until proper operating systems and software engineering practices catch up)

    2. Stupid policies/scripting/macros, especially in email clients
    Microsoft is a big culprit for this; feature creep without any general overview or controls on it is never gonna be a good idea. This just needs some common sense, it seems. Preventing the viruses triggered by these means from gaining too much control (how many people log into Windows machines as admin?) will also help.


    Of course user-initiated viruses will still exist, but that's PEBCAK and can surely be controlled rather than epidemic some day?

    Of course our main problem is some of these things will be only introduced to the mainstream with TCPA.... :(


    Alaric.
     
  3. NiHiLiST

    NiHiLiST New-born car whore

    Joined:
    18 Aug 2001
    Posts:
    3,987
    Likes Received:
    6
    I think the real issue is software companies needing to constantly test their products even after distribution. With Windows, there are exploits being found all the time, such as the RPC vulnerability targeted by Lovsan. That was found a few months back and a patch was released, which you'd think is a good thing.

    However, what should be happening is that major players in the software industry such as Microsoft (there are lots of others too, I'm just using them as an example) need to be constantly looking for exploits in their own software. If they got a couple of coders working on trying to exploit security flaws in the software they created, then the flaws could be found before crackers find them, and patches could be much more widely distributed by the time someone codes a worm to exploit the problem.
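    The "pay your own coders to break it first" idea is basically what's now called fuzz testing: throw lots of malformed input at the code and log whatever makes it fall over. A toy sketch of the idea in Python — `parse_packet` is a made-up stand-in for the code under test, with a deliberately planted short-input bug, not anything from a real product:

    ```python
    import random

    def parse_packet(data: bytes) -> str:
        """Hypothetical parsing routine under test. Planted bug:
        it assumes every packet has at least a 4-byte header."""
        header = data[:4]
        if len(header) < 4:
            raise IndexError("short packet")  # the lurking flaw
        return header.decode("latin-1")

    def fuzz(runs: int = 1000, seed: int = 42) -> list:
        """Feed random inputs to the parser and collect crashers."""
        rng = random.Random(seed)
        crashers = []
        for _ in range(runs):
            size = rng.randrange(0, 16)
            data = bytes(rng.randrange(256) for _ in range(size))
            try:
                parse_packet(data)
            except Exception:
                crashers.append(data)
        return crashers

    crashes = fuzz()
    print(f"{len(crashes)} crashing inputs found out of 1000")
    ```

    Every input the fuzzer flags here is shorter than 4 bytes, i.e. it rediscovers the planted bug automatically — the same loop pointed at real parsing code is exactly the kind of in-house exploit hunting being described.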

    It's naive, though, to think that better coding will eliminate bugs like this. Crackers will always be there trying to figure out ways around security, and bugs will always be present due to the complexity of current software. What we need is more time spent fixing them before they can be heavily exploited.
     
  4. Alaric

    Alaric code assassin

    Joined:
    3 Nov 2001
    Posts:
    2,881
    Likes Received:
    0
    I'm all for a bit of proactive security, but the statistics are against approaches like this. To take an example from 'Security Engineering' by Ross Anderson (I'm paraphrasing, he is quite long-winded):
    Suppose Win2k has a million bugs, each with an MTBF of 1 billion hours.
    The attacker can spend 1000 hours testing each year...
    The defender spends 10 million hours testing each year, with full source and the most skilled coders money can buy.
    The attacker finds one bug, the defenders find 10,000 (and issue patches for them all), and there's still only a 1% chance that they found the attacker's bug.
    His numbers, not mine. I think there's an element of human behaviour that needs to be factored in, like going for the obvious bugs first, but the statistics still seem to be on the attacker's side.
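    The arithmetic behind that paraphrase is easy to check — these are Anderson's illustrative figures plugged into a few lines of Python, not real measurements:

    ```python
    # Back-of-the-envelope check of the paraphrased Anderson example.
    # All figures are the hypothetical ones quoted above.

    total_bugs = 1_000_000        # assumed bugs in the product
    mtbf_hours = 1_000_000_000    # mean time to trigger any one bug

    attacker_hours = 1_000        # attacker's annual testing effort
    defender_hours = 10_000_000   # defender's annual testing effort

    # Expected bugs found = effort * (bugs / MTBF)
    rate = total_bugs / mtbf_hours          # bugs found per test-hour
    attacker_finds = attacker_hours * rate
    defender_finds = defender_hours * rate

    # Chance the defender's haul happens to include the attacker's
    # one bug, assuming both draw independently from the same pool:
    p_overlap = defender_finds / total_bugs

    print(attacker_finds, defender_finds, p_overlap)
    # → 1.0 10000.0 0.01
    ```

    So even a ten-thousand-fold effort advantage leaves the defender with only a 1-in-100 chance of having patched the one bug the attacker knows about — which is the whole point of the example.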

    The reason compilers that catch overruns are useful is that they take out a whole class of bug, and so can partially beat the statistics in that way.

    However, no large piece of software will ever be bug-free, and there'll always be a case for a certain amount of auditing, formal verification and so on... especially in critical systems. Just look at all the electronic voting dodginess going around.

    There are also problems with releasing all these patches, especially when computer hardware is frequently very reliable these days. While we may always be on the bleeding edge of technology, the people running old hardware and old software no longer supported by the manufacturer are always going to be vulnerable. As are the less skilled users who use the PC as a casual tool, much like a car, except that it's not regarded as so lethal and you don't need a licence.

    There's no solution for security; you can only do your best to manage it.


    Alaric.
     