Wednesday, 24 July 2013

Video Games! WebRTC for Gaming Purposes...

Note: this is the second in a series of blogs I'm writing on the topic of WebRTC; the focus of this one is the gaming applications of WebRTC. For an overview of WebRTC click here, and for a tutorial on DataChannels, here. Large sections of this one, however, are taken from an essay I wrote on the subject of camera feeds in WebRTC. You can see the original application here.


WebRTC is a project focused on bringing Real Time Communication (RTC) to the web. Currently in draft status, the W3C considers it "neither complete nor stable, and as such is not yet suitable for commercial implementation. However, early experimentation is encouraged." Despite being at such an early stage, much of the WebRTC project is supported by Chrome and Firefox's developer editions, and some aspects of the project have already made it into a stable release of Google Chrome. Whilst a project focused on real time communication may not initially sound like it lends itself particularly well to gaming, it opens up several doors for game developers. Two key aspects of the project are an API for accessing feeds from cameras and microphones connected to the device, and a robust set of tools for streaming audio, video and data on a peer-to-peer basis between browsers.
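Given how patchy support was across browsers at this stage, a page would typically feature-detect the relevant APIs before trying to use them. The sketch below uses the vendor-prefixed names of the era and is only an illustration, not code from any particular application:

```javascript
// Rough feature detection for the two WebRTC pieces discussed here.
// The prefixed names reflect Chrome and Firefox builds of the time.
var getUserMedia = navigator.getUserMedia ||
                   navigator.webkitGetUserMedia ||
                   navigator.mozGetUserMedia;

var PeerConnection = window.RTCPeerConnection ||
                     window.webkitRTCPeerConnection ||
                     window.mozRTCPeerConnection;

if (!getUserMedia) {
  console.warn('Camera/microphone capture is not available in this browser.');
}
if (!PeerConnection) {
  console.warn('Peer-to-peer connections are not available in this browser.');
}
```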

The console industry has recently become obsessed with cameras and motion capture and control, and having seen the Playstation 4's stereo camera and 'light bar' controller it is safe to assume this will continue at least into the start of the next generation of gaming. In light of this, allowing browsers access to video streams has a lot of potential for gaming applications over the next few years. Similarly, with the popularity of online gaming constantly on the up, a simple way of creating fast, peer-to-peer data connections and audio streaming should open a lot of doors. It'd be fair to expect the web app sector to follow similar patterns to the mobile and social sectors in creating low-cost or freemium apps, in which case a core focus should be making these available to the much wider market of social gamers, who traditionally won't have access to the same high speed internet connections expected of core gamers.


To this end I created a proof-of-concept for using WebRTC in gaming applications. As suggested in the WebRTC draft it is an experiment, and given that the specification is considered incomplete it could not be considered a commercially viable product at this time. That being said, the application seeks to emulate the motion gaming experiences of the previous generation; it is particularly influenced by the EyeToy series of games (Sony Computer Entertainment Europe, 2003-2008) for the Playstation 2, which - quite like this application - only assumed access to a single camera with no infrared depth detection or stereo images.

The web app's main goal is to use the WebRTC API as it currently exists in Google Chrome to access the user's camera feed. From there the application draws images from the video feed onto a rendering surface, where they can be accessed at a per-pixel level using HTML5 Canvas routines. This access is used to check for movement within the image and to build a "heat map" of movement which games can then use. Finally, a basic menu system and game are implemented on top of this technology to demonstrate it, and the live video feed is augmented with game components before being displayed to the user.
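To make that pipeline more concrete, here is a minimal sketch of the capture-and-difference loop. It assumes a video and a canvas element sized to match each other, uses the prefixed webkitGetUserMedia call Chrome exposed at the time, and names such as heatMap are illustrative rather than taken from the actual application:

```javascript
// Minimal sketch: grab the camera feed, draw frames onto a canvas and
// build a crude per-pixel "heat map" of movement by differencing frames.
var video = document.querySelector('video');
var canvas = document.querySelector('canvas'); // assumed sized to the video
var ctx = canvas.getContext('2d');
var previousFrame = null;
var heatMap = new Float32Array(canvas.width * canvas.height);

navigator.webkitGetUserMedia({ video: true }, function (stream) {
  video.src = window.URL.createObjectURL(stream); // older Chrome idiom
  video.play();
  requestAnimationFrame(update);
}, function (err) {
  console.error('Camera access denied or unavailable', err);
});

function update() {
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  var frame = ctx.getImageData(0, 0, canvas.width, canvas.height);

  if (previousFrame) {
    for (var i = 0; i < heatMap.length; i++) {
      // Compare the red channel of each pixel with the previous frame;
      // big differences count as movement and "heat up" that pixel,
      // while still pixels slowly cool down again.
      var diff = Math.abs(frame.data[i * 4] - previousFrame.data[i * 4]);
      heatMap[i] = diff > 30 ? Math.min(heatMap[i] + 1, 10) : heatMap[i] * 0.9;
    }
  }
  previousFrame = frame;
  requestAnimationFrame(update);
}
```

The real application layers its menu system and game on top of a map like this, but the differencing idea itself is essentially that simple.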

The application currently only works in Chrome v23 and up, and as with all camera-based games it requires some setting up. Firstly, ensure the environment is appropriate for playing: the player should preferably stand back a few feet from the camera, but as long as the player's body does not extend too far out of the centre of the screen the game should be playable. In fact, the game includes a transparent outline of a human body which suggests roughly where the player should be on screen. Secondly, the lighting conditions, whilst not very strict, may need some adjusting: generally the player should be under an even light, the room should be neither too dark nor too bright, and it is preferable if the light source is in front of the player.

At its heart this project simply seeks to highlight the possibilities that parts of the WebRTC project offer game developers, so it is worth reflecting on how the technology could be advanced further, both within the original game application and more widely in games and other relevant applications. As mentioned earlier, there are two really exciting aspects of the real time communications project: as well as the access to audio and video feeds demonstrated in this application, the work done around peer-to-peer networking and streaming is quite exciting, and some applications of this will also be investigated further.

Looking at the camera feed side of things, one interesting prospect is to develop the idea of augmented reality (AR) within HTML5 applications. Whilst it could be argued that this project's application does demonstrate augmented reality by overlaying the environment with objects and allowing the player to interact with them, AR has advanced well beyond this in other sectors, and by looking at some recent successes some brilliant AR projects could very quickly be brought to the web. To provide some tangible examples, consider the recent work of Playstation in creating games to inspire the imagination of younger children, such as EyePet (Sony Computer Entertainment, 2009), which uses a camera, a bit of space in your room and either a Move controller or a trackable card to let you create, groom and play with your own augmented reality pet. This is something that could quite easily be replicated with the technology provided by a standard desktop or laptop; in fact, working from some of the open source AR software that already exists, it shouldn't be an overly complicated job to port it to Javascript, although efficiency may well be the issue. At this stage there are even unofficial drivers for the Playstation Move controller which may make it usable with the HTML5 Gamepad API.

Similarly, Playstation is potentially showing the way forward with its new Wonderbook accessory, which works with the Playstation Eye camera and PS Move controller to create fully immersive experiences. This style of play has a lot of potential for early learning and for encouraging interaction from younger children, but the expense of a Playstation 3 puts it beyond many family budgets; a web browser version could be much more accessible whilst remaining just as feature rich. A fellow student on the University of Abertay Dundee's Computer Games Technology course researched another strong potential application of augmented reality, namely the possibility of creating content for games in the real world. As well as being an interesting way of increasing the replayability of a game, physical level creation could provide similar early-learning ways of encouraging children to interact with their world, and the current research on this subject should make for interesting reading in conjunction with this technology (Barbour, 2013).


However, instead of developing the augmented reality side of this application, there is a lot of potential in looking more closely at motion capture. Whilst there don't appear to be any games that focus so explicitly on advanced motion tracking as a feature, it can be used to make interaction much more intuitive, and this has been explored on a number of gaming and non-gaming platforms. Kinect for Xbox 360 highlights most of these uses. At the most basic level, being able to recognise a human figure would allow for much simpler UI interactions: the player might be able to just hover their hand over a button rather than constantly making motion over it, as the application currently requires. Similarly, by tracking the motion of a limb rather than simply acknowledging that something moved in a certain area, more accuracy could be achieved in collision response, creating a more immersive game. Finally, in a lot of game types, recognising and tracking the player rather than tracking basic motion would go a long way towards making it harder to cheat.
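Proper figure or limb recognition is well beyond a short snippet, but even with the simple heat map sketched earlier a dwell-style "hover to select" check is easy to imagine. The region test below is purely illustrative (startGame and the button coordinates are made up), and it is really sustained-motion detection rather than true hand tracking:

```javascript
// Illustrative only: treat a rectangular screen region as a "button" that
// activates when movement is sustained inside it, rather than on one swipe.
function motionInRegion(heatMap, width, region) {
  var hot = 0;
  for (var y = region.y; y < region.y + region.h; y++) {
    for (var x = region.x; x < region.x + region.w; x++) {
      if (heatMap[y * width + x] > 5) hot++;
    }
  }
  return hot / (region.w * region.h); // fraction of the button that is "hot"
}

var startButton = { x: 280, y: 40, w: 80, h: 80, hoverTime: 0 };

function updateButton(dt) {
  if (motionInRegion(heatMap, canvas.width, startButton) > 0.3) {
    startButton.hoverTime += dt;
    if (startButton.hoverTime > 1.5) startGame(); // hypothetical game hook
  } else {
    startButton.hoverTime = 0; // reset as soon as the player's hand leaves
  }
}
```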

Another development in the area of motion tracking which has recently gained some traction is tracking aspects of the face. This was somewhat popularised by Samsung in its "Smart Stay" feature for the Galaxy S 3 generation of mobile devices, which kept the screen on whilst the phone's user was looking at it, allowing the user to read a full article without worrying about the screen timing out. This was developed further in the recently released S4 generation of devices, which added "Smart Scroll" and "Smart Pause"; these scroll the screen as the user reads to the bottom and pause videos when the user looks away, respectively. Whilst the Smart Scroll idea could be useful on any blog or website that involves a lot of reading, it could be very interesting if this were brought into gaming. For example, in this project's application it could be interesting if the player were simply required to look at each corner to select that token. This would also be a good way of bringing the player closer to the screen, instead of requiring them to move back to avoid accidentally brushing a token with their arm. Beyond that, charities like Special Effect, which work to make games available to people who traditionally can't play them due to disability, would be able to use such a system to revolutionise delivery of their work, which currently revolves around loaning people the input devices necessary to make these styles of play possible. It also doesn't seem entirely unfeasible, given the existence of OpenGazer, an open source application for tracking gaze which has already gained support from Samsung.

It would be very interesting to see a Javascript port of the OpenGazer project; however, this is where concerns about the future of Javascript start to become apparent. The image processing required even for the basic motion tracking used in this application began to put a burden on the system when run at certain speeds, and whilst it didn't slow the application down too much overall, once the system is developed to scan the image for trackables or body parts it is going to be increasingly difficult to cope with the overheads. This can be somewhat mitigated using the new threading support in HTML5 Web Workers, but threading can only go so far, and there is far more headroom in, for example, native Android and iOS code, leaving a gap that needs to be bridged for HTML5 to become a genuine competitor in the app space. Similarly, it is currently possible to offload some of this work to C++ through plugins and various other extensions, but the point of HTML5 is to be less dependent on plugins, and having to offload work to other languages overcomplicates the process and starts to limit the platforms applications can run on.
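As a sketch of the Web Worker mitigation mentioned above, the frame differencing could be moved off the UI thread, with each frame's pixel buffer handed over as a transferable object to avoid copying. File names and message shapes here are illustrative, not from the actual application:

```javascript
// main.js -- hand each captured frame to a worker so the differencing
// work happens off the UI thread.
var worker = new Worker('motion-worker.js');

worker.onmessage = function (e) {
  heatMap = e.data.heatMap; // updated heat map comes back from the worker
};

function submitFrame(frame) {
  // Transfer the pixel buffer instead of copying it; after this call the
  // main thread's copy of the buffer is no longer usable, which is fine
  // because a fresh frame is captured each tick.
  worker.postMessage({ pixels: frame.data.buffer,
                       width: frame.width,
                       height: frame.height },
                     [frame.data.buffer]);
}
```

```javascript
// motion-worker.js -- the same frame-differencing loop, run off-thread.
var previous = null;

self.onmessage = function (e) {
  var pixels = new Uint8ClampedArray(e.data.pixels);
  var size = e.data.width * e.data.height;
  var heatMap = new Float32Array(size);

  if (previous) {
    for (var i = 0; i < size; i++) {
      var diff = Math.abs(pixels[i * 4] - previous[i * 4]);
      heatMap[i] = diff > 30 ? 1 : 0; // mark moved pixels
    }
  }
  previous = pixels;
  self.postMessage({ heatMap: heatMap });
};
```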

The second key aspect of the WebRTC project is its peer-to-peer (P2P) networking. Designed for fast video and audio conference calling, it comprises two different components. RTCPeerConnection is the audio and video streaming component and is specifically designed for P2P streaming, with functionality for coping with jitter and differing network conditions included in the connection by default to ensure a consistently good quality video stream. The component is only set up to accept audio and video streams; however, Chrome already supports sharing the browser window as a video stream, and it's possible within the specification that this could evolve to sharing just the canvas or game window on a browser-by-browser basis, which would increase the gaming applications of this connection. In the meantime, sharing of game screens isn't universally possible via RTCPeerConnection, and it is RTCDataChannel that should be really exciting: it simply opens a P2P connection between browsers purely for sending packets of data back and forth. The specification allows for "reliable" and "unreliable" connections, which behave much like TCP and UDP respectively, meaning this component is useful both for games that need reliable connections and for games that need fast ones.
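A rough sketch of what game networking over RTCDataChannel might look like is below. The exact option names have shifted between spec drafts and browser builds (early Chrome used a simple reliable flag), so treat this as an illustration of the reliable/unreliable split rather than a definitive API; the signalling needed to actually connect two browsers (exchanging offers, answers and ICE candidates via some server) is omitted, and applyRemotePlayerState is a hypothetical game hook:

```javascript
// Sketch: open a "reliable" channel for game-critical messages and an
// "unreliable" one for frequent state updates such as positions and poses.
var pc = new webkitRTCPeerConnection({
  iceServers: [{ url: 'stun:stun.l.google.com:19302' }] // old-style key
});

// Ordered and retransmitted -- roughly TCP-like behaviour.
var reliable = pc.createDataChannel('game-events');

// Unordered, no retransmissions -- roughly UDP-like, good for fast updates.
var unreliable = pc.createDataChannel('game-state',
                                      { ordered: false, maxRetransmits: 0 });

unreliable.onmessage = function (e) {
  applyRemotePlayerState(JSON.parse(e.data)); // hypothetical game hook
};

function sendState(state) {
  if (unreliable.readyState === 'open') {
    unreliable.send(JSON.stringify(state));
  }
}
```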

When combined with the camera feed, there are some potentially interesting applications for the full WebRTC package. More and more companies are looking at games as an educational tool, and the platform offered by WebRTC would make for an inexpensive and potentially easy-to-customise way of bringing these games to employees who aren't familiar with traditional gaming environments. In particular, when dealing with soft skills such as teamwork and communication, the ability to tightly control what information is available to each player and to allow audio or even video based communication between players offers a lot of potential, not only for enhanced learning but also for passing a greater degree of information back to trainers. In fact, it's quite possible that the soft skills many employers desire could be built into WebRTC games in such a way that people want to play the game for the game itself rather than the potential benefits, which is practically the holy grail of game-based learning.

Continuing the educational theme, WebRTC could be a brilliant platform for bringing games into the classroom at a young age, encouraging social skills amongst very young children whilst delivering a traditional early learning curriculum in the form of collaborative mini-games. This is very similar to what Edinburgh-based games company Tigerface Games currently does; however, their games require a child on each side of an iPad collaborating on solving problems, which is expensive enough at home, and it's currently quite unlikely that a school is going to be able to splash out on an iPad for every second child in a class. Browser-based technology would allow children to work in existing computer labs, or even from the comfort of their own homes with classmates, to develop basic literacy and numeracy, all the while being introduced to technology at a young age. That would begin to address the decline in the skill-sets of students seeking Computer Science degrees, a significant problem in the UK which has been acknowledged even at the highest levels of top companies such as Google and Microsoft, leading Google to donate 15,000 Raspberry Pis to UK schools. Interestingly, the Raspberry Pi would be perfectly capable of running WebRTC games with the installation of a free Linux distribution and a cheap webcam.

However, education isn't the be-all and end-all of WebRTC gaming; sometimes people just want to have fun, and that too can be enhanced by the networking component. When playing this game, depending on how far they get, players may notice that they find themselves making some silly poses in order to reach all of the tokens in time. Games like Start the Party (Sony Computer Entertainment, 2010) for Playstation 3 took advantage of this, collecting screenshots of the player as they went through the game and then using them to provide a quick slideshow for some laughs at the end. Similarly, in bonus rounds the game permitted players to vandalise fellow players' avatars with a free paint tool. In WebRTC, pictures such as these could be forwarded through a Data Channel in the Data URI format, or, with the user's approval, shared on social networks, which helps to promote the game and also allows friends to laugh over shared experiences long into the future.
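As a sketch of that idea, a snapshot of the augmented game canvas can be serialised to a Data URI and pushed down an already-open data channel; here snapshotChannel and the slideshow element are assumptions for illustration:

```javascript
// Sketch: capture the game canvas as a Data URI and send it to the other
// player over an existing data channel (snapshotChannel assumed open).
function shareSnapshot() {
  var dataUri = canvas.toDataURL('image/jpeg', 0.7); // base64-encoded JPEG
  if (snapshotChannel.readyState === 'open') {
    snapshotChannel.send(JSON.stringify({ type: 'snapshot', image: dataUri }));
  }
}

// On the receiving side the Data URI can be dropped straight into an <img>
// for an end-of-game slideshow.
snapshotChannel.onmessage = function (e) {
  var msg = JSON.parse(e.data);
  if (msg.type === 'snapshot') {
    var img = document.createElement('img');
    img.src = msg.image;
    document.getElementById('slideshow').appendChild(img); // hypothetical element
  }
};
```

Data URIs for full-size frames can get large, so in practice they may need heavy JPEG compression or splitting into chunks, since implementations limit the size of individual data channel messages.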

So go! Have fun, experiment - and create awesome new motion-controlled, AR, educational web games, and let me know all about it below!

References



Barbour, C., 2013. Level Creation through Augmented Reality, Dundee: University of Abertay Dundee.

Bergkvist, A., Burnett, D. C., Jennings, C. & Narayanan, A., 2013. WebRTC 1.0: Real-time Communication Between Browsers. [Online]
Available at: http://dev.w3.org/2011/webrtc/editor/webrtc.html
[Accessed 27 April 2013].

Cambridge University, 2009. Opengazer: open-source gaze tracker for ordinary webcams. [Online]
Available at: http://www.inference.phy.cam.ac.uk/opengazer/
[Accessed 1 May 2013].

Google Chrome Team, 2013. WebRTC. [Online]
Available at: http://www.webrtc.org/
[Accessed 27 April 2013].

Moth, D., 2011. Online gaming sees massive growth: infographic. [Online]
Available at: http://econsultancy.com/uk/blog/8536-online-gaming-sees-massive-growth-infographic
[Accessed 27 April 2013].

Plunkett, L., 2013. Detailed Specs On The PS4's New Controller And Camera. [Online]
Available at: http://kotaku.com/5985814/detailed-specs-on-the-ps4s-new-controller-and-camera
[Accessed 27 April 2013].

Sony Computer Entertainment Europe, 2003. EyeToy: Play. [disk]. Sony Playstation 2: SCE London Studio.

Sony Computer Entertainment, 2009. EyePet. [disk]. Sony Playstation 3: SCE London Studio.

Sony Computer Entertainment, 2010. Start the Party! [disk]. Sony Playstation 3: Supermassive Games.

Special Effect, 2013. Special Effect. [Online]
Available at: http://www.specialeffect.org.uk/
[Accessed 16 May 2013].

W3Schools, 2012. HTML5 Web Workers. [Online]
Available at: http://www.w3schools.com/html/html5_webworkers.asp
[Accessed 1 May 2013].

Wakefield, J. & Rich, L., 2013. Google to give schools Raspberry Pi microcomputers. [Online]
Available at: http://www.bbc.co.uk/news/technology-21243825
[Accessed 1 May 2013].
