Ouch, that hurt my head. Main Page > WW2 > United States > 50 states (not sure if that's a list, you can go and check; the title is U.S. state) > Commonwealth of Pennsylvania > Highway > US Route 30 > US Route 30 in Pennsylvania > PA Route 113. That's, uhh, 8 clicks xD. I probably could've shaved it to 6-7, but oh well. Next page is Ontario Health Insurance Plan.
Main Page -> Vietnam War -> Canada -> Medicare (Canada) (from the link Universal health care) -> Ontario Health Insurance Plan. Next up: Promethium.
WWII > Bomber > Bomb > Teller-Ulam design > Hydrogen > Promethium. 5 clicks. Next: Fort Gibson National Cemetery.
Dartmouth > Appalachian Trail > Tennessee > Memphis, Tennessee > United States National Cemetery > Fort Gibson National Cemetery. 6 clicks. Next: PGA Tour of Australasia.
Main Page > United States > Golf > PGA Tour > Official World Golf Rankings > PGA Tour of Australasia. Find Taeke Taekema.
2008 Summer Olympics > 2004 Summer Olympics > Netherlands at the 2004 Summer Olympics > Taeke Taekema. 4 clicks! Find Garapa... He he he.
Mushroom > Super Mushroom > Platform game > Console role-playing game > Ultima (video game series) > The Elder Scrolls III: Morrowind > XnGine. 6 clicks. Find Barb Wire (the superhero, not the wire).
Wikipedia > English Wikipedia > United States > U.S. state > State of Maryland > Rockville > Bethesda Softworks > Terminator: Future Shock > XnGine. 9 (lots of) clicks. Edit: Main Page > Dominican Republic > January 1 > January 17 > Amelbert (Gamelbert of Michaelsbuch) > Utto > Utto (disambiguation) > Udo > Udo Kier > Barb Wire. "Specific gravity", anyone?
Powder (substance) > Fluidized bed combustion > Coal > Mineral > Specific gravity. 5 clicks. Who can find... the "Bengal Engineer Group"?
The Wikipedia Game Website. Come check out the website and let me know what you think. It should be fully up and running by the end of this month. Feel free to link to it, and make sure to tell everyone about the Wikipedia Game :]. www.thewikipediagame.com
^^not spam^^ But please don't just pimp your site (even if it isn't yours); it's against the forum rules. Other than that, welcome to the forums. That site is good; I like the word generator, which is handy for people who don't know what to put next.
Thanks, I'm glad you liked it. And sorry for putting the link up like that, but I don't have a signature, so that's all I can do.
Advice? Hey, I was just wondering: what's the best way to get a forum off to a solid start? If anyone could help me get mine going, please email me at john "at" thewikipediagame.com, or if you have a little time, please just visit the forum on the website and join it. Thanks for any help.
Ididitforthelulz's response directed my attention back to this thread (which I had forgotten about for a while), and I started thinking: is there an algorithmic way to solve this problem? In other words, given two (valid) Wikipedia pages, could a program find the distance between them? Could that program then be used to find the average distance between articles on Wikipedia, thus establishing a kind of six degrees of separation for Wikipedia (which, ironically, is mentioned in that article itself)?

I don't think it can be done, because a program wouldn't get any context clues from the pages. That is, it would have to bring up the "what links here" page and exhaust all the options, effectively doing a breadth-first search. That's a mind-boggling amount of complexity, and of course you'd be slowed down by HTTP requests.

That being said: Main Page → Amphibian → Evolution → Cichlid → Buccochromis → Stripeback hap. (This was hard. I had to start at the end, since only one page links to it, and it's a disambiguation page.) The next item is cheesecake.
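For what it's worth, here is a minimal sketch of that breadth-first search in Python. None of this is from the post above: the MediaWiki API endpoint and the `requests` calls are my assumptions, it follows outgoing links rather than "what links here", and it ignores the API's continuation mechanism, so it only sees the first ~500 links per page.

```python
from collections import deque
import requests

API = "https://en.wikipedia.org/w/api.php"

def outgoing_links(title):
    """Return (up to 500 of) the article-namespace links on a page."""
    params = {
        "action": "query", "format": "json", "titles": title,
        "prop": "links", "plnamespace": 0, "pllimit": "max",
    }
    data = requests.get(API, params=params).json()
    page = next(iter(data["query"]["pages"].values()))
    return [link["title"] for link in page.get("links", [])]

def shortest_chain(start, goal, max_depth=6):
    """BFS from start, recording each page's predecessor to rebuild the path."""
    parent = {start: None}
    frontier = deque([(start, 0)])
    while frontier:
        title, depth = frontier.popleft()
        if title == goal:
            path = []
            while title is not None:       # walk predecessors back to start
                path.append(title)
                title = parent[title]
            return path[::-1]
        if depth == max_depth:
            continue
        for nxt in outgoing_links(title):
            if nxt not in parent:          # skip pages already reached
                parent[nxt] = title
                frontier.append((nxt, depth + 1))
    return None                            # no chain within max_depth
```

Even with the visited check, the frontier grows so quickly that anything past three or four levels drowns in HTTP requests, which is exactly the objection above.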
I think it could be done. Most of the chains here are about six to seven links long. If we assume each page has 100 pages linking to it and from it, a seven-link chain could be found while only needing to load 2,000,000 pages, assuming there are no overlaps. Considering that there are only 2,770,000 pages in the English wiki, I'd say the overlaps involved will be massive, so filtering those out would hugely reduce the number of loads required. As for context, I'd say a dictionary attack would be quite reasonable: if a page doesn't link to the generic categories that the target word does, and the page isn't itself general enough to appear in a dictionary, it is probably safe to disregard. (Would be writing a Wikipedia scanner himself if he weren't busy with other projects.)
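One way to make that 2,000,000 figure come out (my reading, not necessarily the poster's) is a meet-in-the-middle search: expand a few levels forward from the start page and backward from the target, then look for a page the two frontiers share. Under the assumed 100-links-per-page branching factor:

```python
# Back-of-envelope cost model; the branching factor of 100 is the
# assumption from the post above, not a measured figure.
branching = 100

# Naive one-directional search to depth 7:
print(branching ** 7)        # 100,000,000,000,000 page loads

# Expanding three levels from each end and meeting in the middle:
print(2 * branching ** 3)    # 2,000,000 page loads, before removing overlaps
```

The exact bookkeeping depends on where the two searches meet, but the point stands: meeting in the middle turns one exponent of seven into two exponents of three, and deduplicating the massive overlaps shrinks it further still.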
After some light digging around and plugging away at a PHP script, I discovered this: the XML database dump of Wikipedia. So I'm currently downloading it. All 4 GB of it. (Actually, I'll download it tomorrow while I'm at school.) Luckily, I just finished a project using DBXML, so I might be able to get some interesting results out of it with that. At the very least I'll run some scripts on it.
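As a sketch of what "running some scripts on it" might look like, here in Python rather than the PHP/DBXML route the post mentions: stream the dump and pull the [[wikilink]] targets out of each page's wikitext. The namespace URI, the regex, and the filename in the usage comment are assumptions of mine and vary between dump versions.

```python
import re
import xml.etree.ElementTree as ET

NS = "{http://www.mediawiki.org/xml/export-0.10/}"  # differs across dump versions
LINK = re.compile(r"\[\[([^\]|#]+)")                # link target, up to any | or #

def page_links(dump_path):
    """Stream the dump, yielding (title, [link targets]) for each page."""
    for _, elem in ET.iterparse(dump_path):
        if elem.tag == NS + "page":
            title = elem.findtext(NS + "title")
            text = elem.findtext(f"{NS}revision/{NS}text") or ""
            yield title, LINK.findall(text)
            elem.clear()  # discard the parsed subtree to keep memory flat

# Hypothetical usage: build an adjacency list for the BFS sketched earlier.
# graph = dict(page_links("enwiki-pages-articles.xml"))
```

Working offline like this sidesteps the HTTP bottleneck entirely, at the cost of the dump being a day or two stale.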