Sunday, 30 October 2011

Death of the Bookstore (week 7 blog)


Is this the future of book stores?
How does an offline bookshop compare to an online bookshop?

First, there's no shop front. This means no ludicrous rent for a location people know, no equipment installed to monitor theft, no money sunk into fittings and aesthetics, and no counter staff to pay. The list goes on indefinitely. The other key advantage is that online stores can put their warehouses anywhere at all, meaning large warehouses in cheap areas that can stock a far greater variety of books.

But what does an offline shop offer people? The first thing is the same for every offline store – the ability to buy on impulse and have the item there and then. Everything else depends on the store: some offer nothing more than a shop front, but at least some are moving towards having cafés and the like inside so you can drink coffee while you browse. Some say there is also the human interaction, in that the staff make the store an experience, but this is very hit and miss – not every store has great staff. So in the end it comes down to the previously mentioned experience.

So for me that's the difference. Online shops are cheaper but you have to wait for goods to be delivered, whereas offline stores offer social interaction and the immediacy of owning goods.

In asking whether the bookstore can be saved, we start looking into what they can offer that online stores can't. The social experience is good, but even regular hosting of events such as book signings and readings can only do so much. Niche bookstores can be established to sell titles that even stores like Amazon wouldn't have heard of, but niche markets are generally better suited to online retailers anyway.
I would go as far as to say that leveraging social experiences is only the first part, and at best a means of putting off the inevitable. If bookstores could address either the cost or the selection issue then they may be able to keep competing, but if both remain unaddressed then it's only a matter of time.

Did social media win the revolution? (week 11 blog)


The Arab Spring is a recent event as far as historical occasions go (aren't all times when governments are overthrown historical?). It refers to the wave of revolutions that has been occurring across the MENA region, one of whose prominent features has been the use of the internet as a means of communication between participants. Some go so far as to state that without the internet these revolutions would not have occurred.

I am hesitant to attribute the success of anything to a single element, as doing so belittles other important aspects of revolutions, such as the civil unrest that had slowly been building in these areas. But at the same time I can't clearly state that the revolutions would have gone ahead without the internet.

Not all revolutions needed the internet, but would it have sped things up if they had it?

Part of the reason is that I think there are two elements that ultimately determine the winner of a revolution. The first is organisation, as an unorganised group can be easily controlled by the authorities. The second is the number of people, as no matter how poorly organised a force is, with enough people it becomes unstoppable. So the question becomes whether the internet was the key difference in either of these areas, which is difficult to answer.

In terms of technology used in revolutions, it unsurprisingly seems to be whatever form of communication is dominant and available to people at the time. Radio, fax machines, telephones and many others have been used both to organise people and to incite them against the standing regime. While it can be argued that without the internet people wouldn't have been able to communicate, they may simply have defaulted to an older form of technology (as happened in Egypt when the internet was shut down). However, this doesn't tell us whether those older forms of communication would have reached as many people.

Opponents of the theory that the internet was behind the revolutions have said things like "but Facebook is just where people were", but they fail to take into account that this very statement suggests the reason enough people heard about the protests, the poor conditions and the like is that platforms such as Facebook and Twitter allowed news and discontent to spread like wildfire. Despite this, I am again drawn to admit that although the internet reached a lot of people rapidly, this does not mean it was the key difference between success and failure.

In fact, no matter which way you look at it, the only conclusion you can draw about the internet's role in revolutions is that it is a better technology than what came before. But given that we cannot run two revolutions side by side, one with the internet and one without, we are unable to tell with absolute certainty whether the internet served as the ignition point for the fire of revolution, or was instead just added fuel that hastened the outcome.

Convergent Technology (week 6 blog)


So I have a new phone, one of the Samsung Galaxy S2 deals, and I've only just started pulling it apart and seeing how it works. It's got voice recognition, apps for checking Facebook and reddit, and I suspect it can even call people, although I've only used Skype and free wifi to do this so I'm not 100% sure. It has apps that let it work as a media centre, and if I could be bothered going through the programming or getting the apps I could probably make it control my TV and computer.

It's a convergent technology – something that has taken previous ideas and mixed them together to present something different and new.

The PS3 is also an example of a convergent technology

The reason I state it is different and new is that it can achieve things the separate devices could not. For example, a paper schedule only works if I enter information into it; it doesn't take details from my other schedules into account. Because my phone runs Android, it automatically synchronises my schedules from Facebook and Google Calendar as well as the individual one I keep on the phone itself. You could say this extra functionality could be achieved by my direct interaction, but the point is that by being connected it doesn't need it.
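As a toy sketch of that automatic synchronisation, here is what merging events from several sources into one view might look like (the source names and event fields are invented for illustration, not any real calendar API):

```python
# Merge events from several calendar sources into one de-duplicated,
# date-ordered view -- the "no manual re-entry" idea in miniature.
def merge_calendars(*sources):
    merged = {}
    for source in sources:
        for event in source:
            # Key on (date, title) so the same event arriving from
            # two services appears only once in the combined view.
            merged[(event["date"], event["title"])] = event
    return sorted(merged.values(), key=lambda e: e["date"])

facebook = [{"date": "2011-10-31", "title": "Halloween party"}]
google = [{"date": "2011-10-30", "title": "Assignment due"},
          {"date": "2011-10-31", "title": "Halloween party"}]

for event in merge_calendars(facebook, google):
    print(event["date"], event["title"])
# The duplicate "Halloween party" entry collapses into one.
```

The duplicate handling is the whole point: the phone, not me, does the intermediary work of reconciling the sources.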

That's what I see as the point of convergent technologies – they take every intermediary step that we as humans would have had to perform to transfer information from one piece of technology to another, and do it themselves. It saves a lot of time, as calculations now happen at electronic speeds, hundreds per second, instead of waiting for people to update every step of the way. While I can't state what uses this has other than real-time automatic FourSquare updates (I think that's how it works?), I'm not going to preclude anything from happening any time soon with this information.

However, is this the end? At the moment I don't think so. I've read far too much science fiction and think we won't be happy until we achieve the singularity of devices, where you carry around one object that does everything – starts your car and drives it for you, orders coffee when you start to feel sleepy – essentially something to run our lives for us and leave us with only the important stuff to do. If recent trends are anything to go by, I'd hazard a guess that this will probably be some form of revolutionary Angry Birds game.

Thursday, 27 October 2011

“I'm going to destroy Android, because it's a stolen product”


Steve Jobs is dead, and as with every other time a person of renown dies, his biography has stampeded onto the market. The above is one of the key quotes I found in the highlights, because it addresses the iPhone and Android conflict. The statement in the title is from Steve Jobs, according to his biographer, but we have to take it apart a bit. Clearly Android isn't the same product, as the tech specs of the phones are different. Nor is the software stolen in its entirety, because the two function in slightly different ways.
 
What did Android steal from the iPhone?
We'll take care of the obvious stuff here – they share a similar design concept. There are many arguments for how certain elements are subtly different, or how others were improved upon drastically, but when it comes down to it they do look similar – in both the software and the hardware.
Imagine you have an iPhone and you show it to an elderly person whose idea of a complex phone is one that can send messages – someone, in essence, without a great grasp of technology. You are allowed 30 seconds to tell them about it and its features. Now imagine it's a week later, you have an Android phone, and you show it to the same person with the same 30 seconds to explain its features. They wouldn't be able to tell the difference between the two.
So there is a lot of fuss on Apple's part that Google is stealing the ideas behind the development of their smartphones, and it can be argued either way whether the fear is justifiable. You can say the same thing for tablet PCs, where Apple has filed infringement claims to prevent the sale of competitors' devices on the grounds that they are too similar in appearance.
But why does this matter? We can argue that it isn’t about the money. Jobs himself has been quoted as saying
“I don't want your money. If you offer me $5 billion, I won't want it. I've got plenty of money. I want you to stop using our ideas in Android, that's all I want.”
And until shown otherwise we can agree that this must be accurate.
What else? From here everything is speculation, and I have one theory – the thing Jobs was afraid of having stolen was credit for the revolution.
At the news of the death of Steve Jobs, the media ran with many different headlines as you may recall, but pretty much all of them had a variation of “Death of the man who revolutionised the world with the iPod” – and I’ll probably be blasted for making this accusation against a recently departed individual but I think that’s exactly what he wanted everyone to do – to credit him and ONLY him with these inventions that changed the world.
I'd also ask you: who invented the light bulb? If you said Edison, you'd be wrong. There were many inventors before Edison, the best known being Swan, whose bulbs lit England's streets – and even his version was a revision of a less famous inventor's idea. To keep a long story short, the reason Edison became credited with the invention was that his version was more popular and used almost everywhere in place of Swan's. It simply became an assumption that since it was the only player in the market, it had to be the first.
Apple has also played this card before. When they released the iPod, the device that revolutionised the mp3 players of the time, there already existed another device that did pretty much the same thing – Creative's Zen. Again, history only remembered the victor of that war, with most people attributing the technology to Apple and the iPod.
Visual comparison between the original Zen and iPod

It is the turnabout of this matter that Apple now fears, as Android looks like it may very well overtake the iPhone in terms of users. Jobs stated that it wasn't about the money, and in part he's right: it's about making sure Apple got the footnote as the revolutionary smartphone. That this would have let them leverage their market presence as leaders in innovation for the next i-whatever wouldn't have hurt the bottom line either, but I suspect it was mostly a battle of egos to be remembered as the creative genius behind the smartphone.

RFID Implants


Imagine a world in which you no longer needed a set of keys to open your door, but instead all you had to do was wave your arm. Turning on your car, your TV and anything else with a simple gesture of your hand are all things that are possible thanks to the idea of RFID implants.
These implants are nothing more than devices that send out little pieces of data on radio waves that contain enough information to identify things, usually as a stream of numbers and letters. The data can be used to track packages or even contain information about the owner of an animal (pet microchipping) and is something that is gradually moving towards humans.
There are immediate, easily identifiable uses. The idea I gave at the beginning has been achieved by multiple individuals as early as 1998 (I'd recommend looking up Kevin Warwick and Amal Graafstra on Google if you have the time). Some clubs in Europe currently use these as VIP passes: if you don't have one, you can't get into the club. Other uses include the things in the first paragraph – setting up any piece of technology in your home that currently requires a switch so that it activates with the wave of a hand. Like everything else though, it is impossible to have only positives.
One of my greatest personal concerns about RFID implants comes from the security risks they present. The first may not seem that concerning but I'll go through it – the cloning of RFID tags. To put it as simply as possible, the code sent from an RFID tag is what identifies an individual, so anyone who happens to pick up this code and write it to a separate tag will effectively have stolen your identity. You wouldn't even be aware, as the scanning distance can easily be over a few metres (significantly larger in some cases). If the RFID has access to your banking details, then the thieves have easy access to your account.
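A toy sketch of why that cloning attack works: if the reader only checks for a fixed identifier, a recorded broadcast is indistinguishable from the genuine tag (the UID string and class names here are made up for illustration):

```python
# A static-ID tag simply broadcasts its identifier when queried,
# so recording one broadcast is enough to build a working clone.
class StaticTag:
    def __init__(self, uid):
        self.uid = uid

    def respond(self):
        return self.uid

class Reader:
    def __init__(self, authorised_uids):
        self.authorised = set(authorised_uids)

    def admit(self, tag):
        # The reader has no way to tell a replayed ID from the original.
        return tag.respond() in self.authorised

reader = Reader({"3F7A-91C2"})
genuine = StaticTag("3F7A-91C2")

sniffed = genuine.respond()   # an eavesdropper records the broadcast...
clone = StaticTag(sniffed)    # ...and writes it onto a blank tag

print(reader.admit(genuine))  # True
print(reader.admit(clone))    # True -- identity effectively stolen
```

Real tags that resist this use challenge–response cryptography rather than a bare ID, which is exactly what cheap implantable tags tend to lack.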
The second fear is that of uberveillance, the process of being constantly monitored through tracking of your implants. Starting small: if a business makes it mandatory to implant RFID tags in its workers, then by simply establishing a set of scanners at various locations in the office it can constantly monitor your position, knowing how much time you spend at your desk and on coffee breaks, and what time you arrived and left, to the second.
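To make that concrete, a hypothetical scanner log of (timestamp, scanner location, tag ID) entries is already enough to reconstruct one employee's day; all the data below is invented:

```python
# Reconstruct an employee's movements purely from passive scanner logs.
log = [
    ("2011-10-27 08:58", "entrance", "EMP-042"),
    ("2011-10-27 09:01", "desk-3F", "EMP-042"),
    ("2011-10-27 10:30", "kitchen", "EMP-042"),
    ("2011-10-27 10:47", "desk-3F", "EMP-042"),
    ("2011-10-27 17:31", "entrance", "EMP-042"),
]

def movements(log, tag):
    # The ordered trail of locations for one tag -- no cooperation
    # from the employee required, only scanners at doorways.
    return [(when, where) for when, where, who in log if who == tag]

for when, where in movements(log, "EMP-042"):
    print(when, where)
```

Arrival time, coffee break and departure all fall straight out of the trail, which is the whole worry.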
As for the big stuff, how many people remember the Australia Card (do they still cover that topic in history for the School Certificate?)? A replacement for the Medicare card, passport and driver's licence, it was to be an identifying card holding every piece of information about you in a single location, providing the government with the ability to effectively monitor everything you did. While at the moment I have nothing but contempt for the Australian Government's ability to implement technological solutions to anything, there is still a fear that one day RFID tags could be used as a means of tracking individuals everywhere in the country.
It is for these reasons that I am cautious about RFID tags, however I intend to see where the technology develops before I make any final decisions.

Sunday, 16 October 2011

Government integrity has sprung a couple of wikileaks (Week 9 Blog)

I suspect I'm going to get a lot of grief about the title for this one, but it's the best I can do at 2am.


Last month Wikileaks released the cables it was holding, giving everyone access to the content provided they could download it. Most of these are fairly innocent in nature or contain nothing of any interest. Others, however, are the exact opposite, containing information that is damning and has great political and economic implications.

There has been some speculation however as to whether this is the best method of delivering the information to the public. The matter is complex and requires some unpacking before it can be properly addressed.

First and foremost, consider that the information had no other avenue of release to the public. Previous cases, such as the 12 July 2007 Baghdad airstrike involving the deaths of Reuters journalists, have demonstrated that freedom of information requests do not always obtain the desired details, and that without Wikileaks this would most likely have remained buried. The main avenues people use for news, media organisations like Al-Jazeera, have been shown within these cables to be complicit with US government requests to filter their coverage of events.

So in this sense it can be stated that some information will not reach the public unless Wikileaks steps in, and Wikileaks can therefore be described as performing an important social function: that of the fourth estate.

The second common idea on the topic and one against Wikileaks is that the information puts people in danger when they are undercover and also in the middle of military operations.

This is a very interesting point that some people make, in that if we try to research corruption we may instead be endangering the lives of others. But if this is the case, we have to ask what type of situation are we dealing with? To me it appears to be nothing more than a hostage situation in that a corrupt government can hide their deeds in such a way that uncovering them may endanger innocent lives.

But to argue the ethics of hostage situations is not the point of this blog and would serve as nothing more than a distraction from the topic at hand. These two points are, for me and many others, the crux of the issue. But here's the interesting thing – neither of them denies that corruption is occurring, or that corruption itself endangers lives and other things we value; they only dispute whether the public should know about it.

Overall I would state that some people are going to be worse off because of Wikileaks – mainly company directors, politicians and other world figures. There will of course be some individuals at risk but mostly it will be the criminals and others that are revealed and in this sense I would like to think that the risks are worth it and that Wikileaks serves a vital social function.

Government hacking


I'll be honest and say that not all of the things in this blog apply to Australia, as the cases come from Germany (although I'm led to believe Australia also purchased the same DVDs as Germany). There are probably hundreds of other cases like them, but these are the two that stick out most in my mind.

One of the articles I found today on Reddit stated that the German government had recently purchased a few DVDs containing personal information stolen from banks in Luxembourg and Switzerland. Unsurprisingly (we all know the cliché of Swiss tax havens), the information contained evidence that German citizens were evading taxes, and the government is using it to prosecute people. This has been approved by the High Court in Germany.

The reason this stuck out is that in the past it was also discovered that the German government commissioned the production and release of malware that installs itself on computers, providing the government with the ability to observe all actions taken on the computer and even remotely control the PCs.

Is this legal?

Well, one of the key benefits of being a government is that you get to decide what's legal, so the question doesn't really apply here.

Is this ethical?

It's a complex matter. Part of it is that government-endorsed hacking at the moment is only being used to prosecute criminals. If the bank records hadn't contained any evidence of illegal transactions then no one would have been arrested or convicted, so it was merely the government going through illegally obtained material.

The second part is that this effectively amounts to the government giving approval to hackers. Regardless of the hackers' original motives – black, grey or white hat – they broke into a financial institution, and rather than being reprimanded by governments they have been handed fabulous payments of cash for the data. I can see no way this could be interpreted as anything other than an incentive to keep doing this sort of thing.

There is a counter argument that can be made that this is preventing the hackers from using the data in immoral ways, such as identity fraud or other operations and is instead being only used to combat crime. This does not preclude the option of the data being used for identity fraud and it could very well also encourage hackers to start collecting as much private information as possible on citizens to sell to the governments, creating a surveillance state.

Further still, this gives no consideration to the other impacts that endorsed hacking or malware may have on society. One of the key concerns with the government's malware was that it allowed control over the PC it infected, which leads to two immediate problems. The first is the fear of corruption: with access to your computer, they can have the PC download illegal content and claim it was you who did so. The second fear is similar, in that we do not know who will have access to this malware.
The reason this is a separate concern from corruption is that the code used in the malware was poorly written and full of security flaws, to the point that anyone with technical knowledge could gain access to it. In fact, one group has now reverse engineered the code and built a simple user interface, so even the technical knowledge is unnecessary – anyone can access your computer, the information on it, and take control of the machine.

In light of these concerns I feel that I could state that these actions are unethical.

However, this isn't something that can be concluded merely by stating whether the actions are ethical or not. We are coming to a point where the internet is involved in our lives in progressively complex ways. We do not live out our lives in person as much, but rely on the internet to conduct a large portion of them. With these developments we should not be surprised that governments are using the internet to solve problems, but rather that they have taken so long to do so. Caution is advised: although these are mostly isolated incidents of poor decision making, it is imperative that the space be watched for further developments, to see whether they were merely steps on a learning curve or an ominous foreshadowing of government surveillance to come.

Sunday, 18 September 2011

Truth by numbers (Week 8 - Citizen Journalism)


When someone says that something has happened you can look at it in many different ways when trying to determine the value of the statement.

They could be lying, they could have misunderstood what happened or they may be right, but it’s hard to work out based on the statement by itself. You need to judge contextual elements like how well you know the person, what benefits do they gain by lying as opposed to telling the truth, etc. Hundreds of things, and the bigger the thing that happened the more reason you’ll have to doubt them.

Of course, society has told us that there is one group of individuals who can generally be trusted – the journalists. We assume that, given they have the pride of the organisation they work for at stake as well as their own journalistic integrity, they will be honest in their retelling of events. There are numerous examples in which this is wrong, but for the most part it's an accepted part of the world that if an event is reported by a major newspaper then it happened.

Sure, that's great for reporters, but what if the person telling you about an event isn't a reporter? We can check a news site to see if the event occurred, but that still leaves us unable to trust the average person, with the news publishers left as the default fountain of truth.

It's a bit of a tricky matter, but what if it wasn't just one person telling you that something happened, but a hundred, or a thousand? Sure, as indicated before, each person may have a reason to lie to you, but the odds of so many people together all wanting to misdirect you, or all having seen the event the wrong way, drop with each additional person who says the event happened. The larger number of witnesses shifts the assessment of the event from a qualitative analysis to a quantitative one.
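That shift can be put in rough numbers. If each witness independently misreports with probability p, the chance that all n of them are wrong is p raised to the power n (noting that this independence assumption is a strong one, which real crowds, with their retweets and echo effects, will violate):

```python
# Probability that ALL n independent witnesses misreport an event,
# given each one is individually wrong with probability p.
def chance_all_wrong(p, n):
    return p ** n

# Even quite unreliable witnesses become collectively convincing:
print(chance_all_wrong(0.3, 1))    # 0.3
print(chance_all_wrong(0.3, 10))   # about 5.9e-06
print(chance_all_wrong(0.3, 100))  # about 5e-53, effectively zero
```

The arithmetic is why sheer numbers can substitute for a single trusted reporter, and the broken independence assumption is why they sometimes fail to.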

This creates a system of truth by numbers, truth by majority or any other way you want to phrase it and is the key supporting feature of citizen journalism, creating believable news by nothing more than having sheer numbers.

This quantitative news reporting actually has a few key advantages over regular journalism. Regular journalism is ideally driven by the effort to maintain integrity in its reporting, forcing outlets to analyse the quality of reports and sources before publication, which can cause delays. Also, being limited in the number of reporters they employ, they may not be able to cover an event immediately. Citizen journalism can instead focus on relaying events as they occur, providing more rapid dissemination of information; any incorrect information should ideally be drowned out by accurate reports.

The second element is that newspapers are often limited in the information they can include based on space in the article, not having a source on the event or just not even having a reporter in the area. Citizen journalism on the other hand is more inclusive – as long as someone saw it happen it can be reported and added to the hulking mass of information.

Of course, truth by numbers isn't without its own problems. Although I mentioned that the odds of a large number of people getting something wrong are low, it isn't impossible. Nor is it impossible for the information to be misunderstood or misrepresented, such as the scope of an incident (i.e. a large community in a small area could generate enough noise to make people believe an event covered a much larger area). Further still, only the really big news – hostages, wars and other similar major events – gets enough people posting about it to make an impact on Twitter trends.

But still, citizen journalism is a thing that exists and continues to evolve. We have the information available to us, but we are currently lacking in the tools to get the important details in a form we can easily digest. Given another ten years however and it could very well develop into the major source of news for most people, surpassing that of regular journalism.

Monday, 29 August 2011

Dead men's money (Or: Why I feel copyright should change)

So the idea of copyright is that if someone creates an idea that can be expressed in a form of media (speech, song, book, etc.) then it will be protected by law, giving them a monopoly on it and allowing them to make as much profit from it as possible.

It’s a fantastic idea in theory as I’m sure that each and every one of us would want to make as much of a profit as possible for our work.

However there are a few problems that I personally have with the system, which spread over the duration of protection, corporate control and ownership and methods of enforcing protection.

Firstly my biggest concern is the duration of protection that is afforded to the creator of works, which at the moment lasts for the life of the creator plus 70 years. This effectively puts a stranglehold on an idea or a concept for about three generations, placing it inside a protective bubble from which the world can view it, but never interact with it on their terms.

This may seem like an extreme way to state the obvious, but it should probably be mentioned anyway. The average lifespan of a person in Australia is 81.5 years, which means that if a work is created today and you are over the age of about 12, you will, statistically speaking, be dead before you can do anything with it – and that's assuming the author also died today. If the author was 20 when they made the work and lived to the average lifespan, not only will you be dead, but your children will be dead too, and your children's children won't be long for this world either. It effectively seals ideas away from at least one generation, with potentially up to four being affected.

I don’t know about you but to me that seems like madness. Sure, people want to protect their work but a three generation embargo on certain ideas is a touch excessive. After all you won’t be around for the last 70 years so who is actually profiting off of the content?

This leads me to the next issue I have with copyright: who actually gets control over the work. I used to work in a fairly large company which had some interesting terms in the contract I signed. One of the big ones was that any intellectual property I created while at my place of work was the property of my employer. This was because we dealt in information, and there were often better ways to perform certain tasks, such as programming a search of a system. All self-explanatory really, as the systems between competitors were often similar and my employer didn't want me selling off anything that would give the competitors an advantage.

However, the contract wasn't worded to cover only work that related to those systems, nor only work done in the office, so there was potential for any idea I created at home on a weekend to end up as their IP – which could prevent me from working on it until 70 years after my own death.

I imagine the contracts are somewhat worse in intellectual-property-generating industries such as music or publishing, but the idea remains much the same: although creators have free rein in creating an idea, they are unable to edit it without permission from their employers. As mentioned before, this system is in place until the day the creators die, plus 70 years, making it impossible for them to work with their own ideas.

Lastly is the process of enforcement of the protection of copyright.

In gaming this is a 'Big Thing' under the title of DRM (digital rights management). It is essentially a process of not trusting people to do the right thing when purchasing games, instead putting restrictions on the media so that it monitors what you are doing. The more benign form of DRM simply puts a copy restriction on the disc, preventing it from being copied. The more malignant form involves entire programs that scan your computer for anything connected with software piracy, after which the game refuses to run. There are more forms of DRM, and it works across most media industries, but it all equates to the same thing.

Controlling how you use your media after you have paid for it.

Now we can go back once again to my favourite hypothetical example.

I've mentioned before that copyright lasts 70 years after the creator's death, and that the creator can actually lose access to their copyright by working for a company. Add the above DRM and it means that a content creator fired from a media production agency can be forced to pay to access their own idea, and be under constant surveillance while viewing it. Worse still, this is 100% legal and remains enforceable until 70 years after the creator's death.

Thinking about the number of people denied access for the 70-plus years after content is generated, I find it difficult to agree with copyright as it stands. Any protection that extends beyond the death of the creator is far too restrictive, as it can easily prevent a generation from accessing an idea. I'm not sure what system best preserves everyone's interests, but all I can say with certainty is that the system needs to change to provide more access to everyone.

Thursday, 18 August 2011

Trapped in a loop (Week 4 blog: work life permeating into home life)

One of the great things about office work is that there’s always a connection between you and the office, so if you ever want to get ahead on your work there’s an option to do so – after all, if you get more work done now it means you don’t have to put as much effort in on the day, right?

I could barely type that with a straight face after having worked in information technology for the past three or so years, always on call, but I remember when I first started how this was exactly the thought process among all the new employees. What we didn’t realise was that if we responded to emails after hours, it would become an expectation, and subsequently a demand, from the boss that we always be up to date with emails and work. This meant that our first few weeks of lazing back eventually came back to bite us hard as our workloads increased, with extra work assigned by email (which sent an auto-response when read, so the boss knew you’d viewed it) that couldn’t be easily avoided.

Get it? It's Office Space, and enough people have done this topic that it's beating a dead horse. Ah, bugger it...


In a tutorial today this was the point of the discussion – while communications technologies have allowed people to be ever more connected via the internet, this has the added side effect of connecting them with their place of work, allowing them to check emails at any time of the morning and never really giving them a chance to take a break. If my above example isn’t proof of this occurring, I’m not sure what else I could include to demonstrate it.

But the other point that no one seemed able to address was how anyone can create a strict demarcation between work life and home life in an ever-connected world. Some students in the class pointed out that manual labour jobs often had strict clock-on and clock-off times, but when I was working as a manager in retail we were often encouraged to get employees to do as much work after they clocked off as we possibly could (e.g. empty a bin on the way out, move some stock), and I can’t help but think of the old days when information workers would just stay in the office until late in the evening to get work done, so I don’t think that’s an option.

The other alternatives I can think of, and have tried with varying degrees of success, were:

A) Don’t do the extra work. This meant I had to do more on the day, but at least I got more time off.

B) Be unethical in your work. This means employing every trick possible to get out of work, such as re-allocating it, creating a fictitious report you have to work on and convincing the boss it’s real, or setting up an auto-reply for your email that says it failed to deliver to your address.

C) Work harder and have the backlog cleared by the end of the day. This one isn’t even viable all the time, due to the sheer number of hours it takes to do anything even when working to the best of your ability.

I always felt these three options weren’t as productive for the business as they could be, but the culture of working outside of work hours is too ingrained in most businesses for me to see any alternative solution other than quitting your job and trying to find one with better conditions.

It’s something I feel is going to be an increasing problem in the workforce: as we grow ever more connected, employers gain ever-increasing control over what they can see of our lives and how they can interact with them, not just through email and smartphone communication but even through social networking websites, where they can position themselves as your friends.

Sunday, 14 August 2011

Shiny stuff on the internet (or why cyber libertarianism hasn't been achieved)

In the tutorial for the class a discussion arose on why the internet had not reached the utopian ideals envisioned by the early cyber libertarians, to which some people commented that it was a new technology, or that too much control was being exerted by governments in an attempt to rein it in under their power.
This reminded me of an image that I found online a while back that is far too long to be contained in this post. I’d recommend reading it before carrying on.
To try to understand what could have caused the cyber libertarian ideals to fall short, I considered my browsing habits, what I use the internet for, and the information I tend to consume on a regular basis. When I’m bored I have access to millions of articles I could read to further educate myself, and ways of exposing myself to hundreds of new viewpoints to grow as an individual. But I don’t. Instead I go to reddit, click everything from imgur and spend the next two hours looking at pictures of funny cats or political commentary. Ironically, this is how the above image was discovered – by chance, as I was looking for something to entertain me until it was time for sleep, the next tutorial or whatever. I honestly don’t think this would be too uncommon to hear from other people, just replacing reddit with facebook, myspace, or any other website.
What makes the issue complex here isn’t that there’s an antagonistic force at hand creating these distractions to remain in control, but my own love of distraction and indifference to bettering myself, along with other people’s desire to be ‘internet famous’ through raising karma (or whatever equivalent exists on your preferred time-killing website). I tend to think that any government body secretly trying to control us could only do a worse job.
It is this love of distraction, which I find so easily in myself and suspect many others share, that leads me to the slightly pessimistic belief that the cyber libertarianism movement failed not because of some limitation of the technology or restriction placed on individuals by government, but because of the limitations of humanity and the lack of desire in the majority of people to see it through. Although this could also be seen as an optimistic viewpoint: if the restriction is entirely within our heads, then there is the opportunity for change.

Monday, 8 August 2011

Ethics of online gaming spaces


How to describe the MMORPG? The easiest way would be to say that it’s a social game, with gameplay elements built around killing monsters, aliens and whatnot. People build strong friendships online, and in the event of a player’s death in the real world they have been known to hold in-game funerals to honour them. However, one of the gameplay elements of World of Warcraft is the ability for different groups of players to fight and kill each other. While it should be obvious how these two could mix in a bad way, during one such funeral a raid group came along and slaughtered everyone.

Many of the players involved said it was disrespectful and demanded the aggressors be punished, but Blizzard did not act. What I noticed from events such as this was a general theory of ‘code is law’ in the attacking group – or rather, a theory among gamers that if the game world allows you to do something, then it is not illegal to do so.

To further explain this, imagine you see something that you want but could never afford. You know that society has laws in place to prevent you from stealing, but you are still able to steal; in a sense, the law acts as a deterrent and an avenue for punishment. In the game world it doesn’t quite work like that: rather than punishing theft, the developers simply make it impossible, as all your possible actions are coded into the game. If you can’t steal anything, what would be the point of making any law or threat of punishment?
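The idea above can be sketched in a few lines of code. This is purely my own toy illustration (the class and method names are invented, not from any real game): the world never needs a rule against stealing, because stealing is simply not among the actions the code defines.

```python
class Player:
    """A toy game character: items move only via the actions coded below."""

    def __init__(self, name):
        self.name = name
        self.inventory = []

    def loot(self, item):
        # Picking up loot from the world is a permitted action.
        self.inventory.append(item)

    def trade(self, other, item):
        # Trading is permitted, but only with items you actually own.
        if item in self.inventory:
            self.inventory.remove(item)
            other.inventory.append(item)

# Note there is no Player.steal() method, so theft isn't merely
# forbidden and punished (as in real-world law) – it is impossible.
alice, bob = Player("Alice"), Player("Bob")
alice.loot("sword")
alice.trade(bob, "sword")
print(bob.inventory)  # ['sword']
```

Anything the coded actions do allow – like attacking mourners at a funeral – carries no in-game sanction at all, which is exactly the ‘code is law’ attitude described above.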

In this sense World of Warcraft has few laws restricting what players can do (most of them relate to external program use, chat harassment and gold selling), so there is a belief that if your character can do something, then no matter how unethical it is, there will be no punishment for it. Lessig (2006) argues that this is one of the main features of the virtual world: it relies more on the social graces of players to make a good society than on any real world laws.

Another example comes from EVE Online (famous for its politics and backstabbing), where a player created a banking system that functioned like a real world bank (IGN, 2003). Players could make deposits and withdrawals, get loans at variable rates, and do all the other things a bank does. The CEO of this virtual bank, Ricdic, one day decided he had enough money and ran. The most interesting thing is that the developers of EVE couldn’t ban him for it, as it wasn’t against the code of the game.

There are of course more examples (EVE Online alone would provide a massive backlog), but each raises interesting points. Although disrespecting the dead and embezzlement are crimes in the real world, in gaming worlds they are merely unethical. Is this something that should be addressed by real world standards of law and judgement, or should it instead rely on the ethical considerations of game designers to limit the actions of the player?
References
IGN 2003, ‘EVE Online bank scandal’.
Lessig, L 2006, ‘Four puzzles from cyberspace’, Code: Version 2.0, pp. 9–30.

Sunday, 7 August 2011

Introduction

Hey everyone, I just managed to get back from the combined adventure of my sister's wedding and a snowboarding trip, which is why I've been pretty much as silent as a ghost for the past week and a bit.

I'm somewhat hesitant about introducing myself through blogspot, as I find I tend to write out a four-paragraph introduction only to delete it after staring at it for 20 seconds, so I guess I'll go with my usual introduction for these blogs.

My name is Owen Godfrey. I'm an ex-Computer Science student who majored in game design before quitting uni after realising that I didn't want to be stuck behind a desk doing whatever some guy in a suit told me to do for the rest of my life. My solution was to go work in a bank behind a desk for two years before realising I had completely screwed that up, and that I should probably go back to uni and at least get a degree, even if it was a useless piece of paper after graduation. Deciding on the Bachelor of Arts as the most useless degree, I enrolled in a major that was cancelled immediately after its creation, leaving me with a lot of free time to pick up interesting subjects. My academic interests are based mostly around IT, gaming and electronic cultures.

Outside of my academic life I'm generally laid back in what I do, which is one of the requirements for the hobby that has filled the majority of my time - gaming. As you probably gathered from the paragraphs above, I'm fairly into gaming; if I'm not playing games then I'm meeting with friends in a small indie group trying to create something. To keep this as short as possible without going into my full gaming history: both my parents are gamers from the days of pen and paper gaming (D&D, for example) and they passed the interest on to me. I'm pretty sure I was raised with a controller or joystick of some form in my hands, and it's led me to where I am today.

So again - sorry for the huge delay in presenting this introductory blog, and I look forward to seeing you all in class.