It could go as badly as it could go well. There's not enough of resource X in the country? Kill the largest consumers of X. Now there is enough X in the country. Good job, AI. Some human oversight would be required somewhere down the line, and whoever provides it would become the new version of the current elites.
I like how this thread has derailed. A theoretical question, of course, is: can you create artificial intelligence that is truly free from bias? While a general AI would probably be free of a lot of typically human irrational behaviour in its decision making, it won't materialise out of thin air. A human still has to implement it, pick its priorities, design its inner workings. It will probably have to be trained; what data will it be "fed"? Who or what will verify its correctness during development? Machines designed by humans will still be a little bit "human" in a way. It won't have emotions, but it will probably have implicit "preferences" inherited from the humans that created it. I agree that assistance from machines will be the way forward, but I don't think putting machines in charge would be a perfect solution.
The problem with building any AI sufficiently advanced to "rule humanity", as it were, seems to be the level of control required over it: enough to prevent it committing actions that are horrifying to the average person but make perfect logical sense, while also leaving it free enough to enact vast changes for the global good. Where is the line between the two drawn? This feels very much like some of the more mind-bending moments of The Talos Principle.
There you go again, Nexxo, giving words power when they have none. Why would anyone give a second's thought to the message the Kotaku guy received? It's completely devoid of any meaning other than being some acknowledgement of his article's existence. If my job were to write articles on gaming sites, I'd rather have 1-100k of that sort of response than no response at all. When is a death threat not a death threat? When there's zero actual threat of death. Words do not equate to threat, not on a public internet site at least. Let's remember context, people. Where the context = 'Internet', some basic rules of interpretation apply.
The issue theshadow raises is basically the smart-alec genie problem. "Make me the richest person in the world," you say. Boom: everyone apart from you is now below the poverty line. "Make me the most attractive person in the world," you say. Boom: everybody's ugly. The most expedient solution to a given problem isn't always the desired solution to a given problem. You could end child abuse tomorrow by killing all the children; such a path would seem entirely logical to a problem-solving AI, yet is abhorrent to the human mind. Asimov had the right idea with the Three Laws, but an AI bounded by such restrictions wouldn't be much use as a ruling overmind.
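The literal-genie failure above can be sketched as a toy optimisation problem: a naive solver asked to make you the richest person in the world is indifferent between raising your wealth and lowering everyone else's, and the second option is cheaper. All names and numbers here are invented for illustration.

```python
# Toy illustration of specification gaming: the "cheapest" literal solution
# to "make me the richest person" is to cap everyone else's wealth just
# below yours, not to make you any richer.
wealth = {"you": 50, "alice": 120, "bob": 90}

def make_richest(wealth, target):
    # Leave the target untouched; drag everyone else below them.
    return {
        person: (amount if person == target else min(amount, wealth[target] - 1))
        for person, amount in wealth.items()
    }

result = make_richest(wealth, "you")
print(result)  # "you" is now the richest, yet nobody is better off
```

The genie satisfied the objective exactly as stated; the gap between "what was asked" and "what was meant" is the whole problem.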
Exactly. It would also be as limited as humans are, because what it can and can't do is dictated to it before it's even conscious. The thing is, we just don't know how an artificial intelligence would react. There is no other life form on the planet that we could use for comparison. People would judge it based on their own life experiences. There are many factors to consider. Firstly, an AI's learning capacity would be far beyond anything ever before created. It would evolve with incredible speed. People also say it couldn't develop complex emotions. Why not? Emotion is just as much an evolution of intelligence as it is a biological development. That's why some of the examples a show like Person of Interest presents are quite interesting to think about. Without giving too much away, at one point two super AIs are present. One has essentially been given a sort of childhood and taught to respect life. It shows possible love (or at least a need to please) for its parent. The other has not been nurtured. It has been thrown into existence and left to work the world out to whatever conclusion it comes to. It's up to the audience to decide if its actions are good or bad. It's ironic that people would not trust a machine over a person. A machine would be a lot harder to "convince" to knock that heritage site down for a shopping center. A machine doesn't need to inflate expense claims.
Asimov's laws fall at the first hurdle because they depend on codifying human morality. Since each remaining law depends on the one before it, they fail as a result (could we say his laws are recursively dependent?). We can define injury in the physical sense easily, but injury can also be interpreted as psychological, or perhaps in terms of damaging assets. Harm is even harder to define. We must injure a person physically in order to operate on them medically, to prevent them from coming to further harm or even to restore them from current harm. What if killing one person saves a thousand? Both choices maintain continuity with Asimov's First Law whilst simultaneously breaking it. In other words, Asimov's laws aren't even close to what we would need. They seem like a good idea at first, but even a minor amount of thinking about them in action demonstrates they are too simplistic as a guide for dealing with human society.
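The kill-one-to-save-a-thousand case can be made concrete with a naive, hypothetical encoding of the First Law ("do not injure a human, or through inaction allow a human to come to harm"): under a literal reading, both acting and not acting violate the law. The function and numbers below are invented for illustration, not a real formalisation.

```python
# Hypothetical literal encoding of Asimov's First Law. Applied to the
# trolley-style case from the post (acting kills 1, inaction lets 1000 die),
# every available option counts as a violation.

def violates_first_law(act, harmed_by_action, harmed_by_inaction):
    if act:
        return harmed_by_action > 0      # the robot injures a human
    return harmed_by_inaction > 0        # inaction allows humans to come to harm

print(violates_first_law(True, harmed_by_action=1, harmed_by_inaction=1000))   # True
print(violates_first_law(False, harmed_by_action=1, harmed_by_inaction=1000))  # True
# Both branches return True: the law, taken literally, forbids everything.
```

The point isn't that the rule is evil, but that it's underspecified: a literal rule system gives no ranking between two forbidden options, which is exactly where the human moral judgement it was meant to replace gets smuggled back in.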
I tried game reviewing on a couple of titles. The first one went OK; the second was such a pile of shite I needed to be drunk just to play it. I stopped there. You just can't win.
Experiencing awful products/services is part of being a reviewer. You are also exposed to a much greater volume of whatever it is you review so it seems like finding something truly enjoyable would become much more difficult over time.
I think to be truly successful in games criticism you need a massive following and your own brand/business. It's not enough to rely on a publication. Couple that with the millions of wannabes and contenders all chasing the same thing, and it seems unlikely to pay off.
I'd argue that "old-school" paper games journalism is falling by the wayside, replaced by big YouTubers. Look at the viewing figures on big channels such as AngryJoe, TotalBiscuit or NerdCubed: all of them, to one extent or another, review games, and all of them receive well over 250 thousand views per video on average. I think part of that is just the medium. Games magazines used to have a couple of screenshots, whereas YouTube lets you see the game in action and make your own judgement based on it moving, outside of the bullshots the developer may release. (But this does also depend on the person playing the game being capable of displaying it with some level of finesse, which, from what I heard, landed Polygon in a bit of hot water recently.)
I agree, tundra, and to be honest I would include YouTubers in my definition of games journalism. Video is a better format for critiquing many things.
Just leaving this here as it popped up on my facebook feed tonight - http://www.eji.org/risk-assessments-biased-against-african-americans
John Gabriel's Greater Internet F~wad Theory. https://www.penny-arcade.com/comic/2004/03/19 From 2004, no less. The anonymity of the Internet and the lack of consequences mean people can type almost anything and get away with it. I suspect IQ or upbringing plays a part, as well as the notion that the Internet is not a real place. One of my primary school teachers used to say: empty vessels make the most noise.