Lately I have been learning & researching security testing for web applications. Here is something I would like to share about an interesting topic in that arena.
Directory Traversing:
If you are a newbie & want to get hands-on with something interesting while learning hacking/security testing, then the directory traversing attack would be an ideal start.
I would like to correlate this with a real-life situation: searching for the home of the girl you love the most. What do you do?
- Look up her Facebook profile for details like phone number, address -> no clue?
- Then get to know her dad's name & look it up in a public directory -> no clue?
- Buy her friend 'Bournville' chocolates daily hoping to get some clue about this girl -> her friend ate, ate & ate, didn't reveal a thing, but asked for 'Snickers' this time?
- Bunk your lab session to steal info from the college records -> there is a Hulk-like peon who doesn't allow you to enter the admin room of the college, & you can neither bribe him nor take him for granted as he is always angry ;)?
Well, if nothing of the above seems to be working, can we think of something simple? Why don't we just follow her home once the college bell rings - Bingo!!! But yes, things could get complicated here too. She may change buses on different routes, then catch a tram, and then walk in directions that are tough to remember. This is it.
Web applications are either built on a solid foundation with security in mind, or they may be so vulnerable that any hacker can crack them in a single attempt and get to the confidential information.
How do hackers gather clues?
1. Robots.txt: A text file that contains the list of directories/files that web crawlers are allowed or disallowed to access.
Oh really!! So how will it help the hacker?
If you think about why someone would want to disallow certain directories or files from being accessible to web crawlers, it leads us to the clue that there could be some confidential data in them. Using this txt file is not mandatory for building a website, but if it is used, it has to be placed under the home directory, like google.com/robots.txt, which makes it easily accessible to the public (see the small sketch below).
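To make this concrete, here is a minimal Python sketch (my own illustration, not from the book; example.com is just a placeholder host) that pulls a site's robots.txt and prints the Disallow entries - exactly the kind of clue we are talking about:

# Minimal sketch: fetch a site's robots.txt and list the paths it asks
# crawlers to stay away from ("example.com" is only a placeholder target).
import urllib.request

url = "https://example.com/robots.txt"
with urllib.request.urlopen(url) as response:
    robots = response.read().decode("utf-8", errors="replace")

# Keep only the "Disallow:" lines and strip the directive part.
disallowed = [
    line.split(":", 1)[1].strip()
    for line in robots.splitlines()
    if line.lower().startswith("disallow:")
]

print("Paths the site does not want crawled:")
for path in disallowed:
    print(" -", path)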
2. File names: Maintaining easily guessable file names is a mistake that most site owners make. For example, say the annual report of a large IT firm, containing the revenue details & dividend-sharing information for the last year, is published on the site. Now, it would be foolish to store the file under a name like report_09122011_market.pdf & maintain the same format every consecutive year. How difficult would it then be to guess the file name, hack in & get the information on future dividends & confidential corporate decisions, which may eventually kill the reputation of the firm (a sketch of such guessing follows)?
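Here is a rough Python sketch of how trivial such guessing becomes; the host, path and naming pattern below are hypothetical, made up to mirror the example above:

# Minimal sketch: guess report file names that follow a predictable pattern
# and check whether they exist. Host, path and pattern are hypothetical.
import urllib.error
import urllib.request

base = "https://example.com/reports"

for year in range(2008, 2013):
    candidate = f"{base}/report_0912{year}_market.pdf"   # guessed naming convention
    request = urllib.request.Request(candidate, method="HEAD")
    try:
        with urllib.request.urlopen(request) as response:
            print("Found:", candidate, "->", response.status)
    except urllib.error.HTTPError:
        pass   # 404 and friends: the guess was wrong, move on
    except urllib.error.URLError:
        pass   # host not reachable

Anything that answers with a 200 instead of a 404 is a hit.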
Tools that could be used:
1. Crawlers: HTTrack is one such tool that crawls all the publicly accessible files. The tool is very easy to use &, depending on the size/complexity of the website, it downloads the contents. Having a peek at each file will give more information on things that are utmost confidential, or at least a clue on how to reach them (a toy sketch of the idea is shown below).
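HTTrack itself is point-and-click, but every crawler does roughly the same thing under the hood: fetch a page, collect its links, follow them. The toy Python sketch below (my own illustration, not HTTrack; example.com is a placeholder) shows that idea:

# Toy sketch of what a crawler does at its core: fetch pages, collect the
# links on them, and keep following links within the same site.
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

start = "https://example.com/"
to_visit, seen = [start], set()

while to_visit and len(seen) < 50:          # small cap for the demo
    url = to_visit.pop()
    if url in seen:
        continue
    seen.add(url)
    try:
        with urllib.request.urlopen(url) as response:
            html = response.read().decode("utf-8", errors="replace")
    except Exception:
        continue                            # unreachable or non-HTML, skip
    collector = LinkCollector()
    collector.feed(html)
    for link in collector.links:
        absolute = urljoin(url, link)
        if absolute.startswith(start):      # stay within the target site
            to_visit.append(absolute)

print(f"Discovered {len(seen)} publicly reachable pages")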
2. Google: We all use this tool, almost without thinking, for anything our brain has lost hope of - like the "Current petrol price in Bangalore" :). The smart guys, though, use queries to get info that is worth millions of dollars. For example, searching with a query like "site:hostname keywords-to-look-for", where the keywords could be confidential, reports, revenue, client & so on.
The security aspects listed above (except the love-story part) are from my learning through the book "Hacking For Dummies" by Kevin Beaver. While understanding & learning about directory traversing is important, so is knowing the countermeasures required to make directory traversing a not-so-easily attackable area for any hacker. I shall come up with those in my next blog.