Design and Development of Gesture Based Gaming Console
Sarika Chaudhary1, Shalini Bhaskar Bajaj2, Aman Jatain1, Pooja Nagpal1
1Assistant Professor, Amity University, Gurugram, India.
2Professor, Amity University, Gurugram, India.
*Corresponding Author E-mail:
ABSTRACT:
Game controllers have been designed and refined over the years to be as intuitive as possible. A game controller is a device used with games or entertainment systems to provide input to a video game, typically to control an object or character in the game. Input devices that have been classed as game controllers include keyboards, mice, gamepads, joysticks, and so on. Some controllers are deliberately designed to be best suited to one type of game, for example, steering wheels for driving games, dance pads for dancing games, and light guns for shooting games. The aim here is to create a virtual environment in which the user interacts with a gaming application through various gesture controls. A gesture is an action that must be seen or felt by another party (here, a PC) and that conveys some piece of information. To create a virtual gaming environment, we first need a real-time gaming application, so our 2D and 3D gaming applications are designed with the Unity 3D game engine. The data used in this project comes primarily from the EgoHands dataset. After an input has been captured and the corresponding action performed, this activity is used for further development of the model with TensorFlow. Input is taken through the webcam of the PC, which is accessed and combined with the gaming application and the hands dataset through WebGL. WebGL is a JavaScript API for rendering interactive 2D and 3D graphics within any compatible web browser without the use of plug-ins.
KEYWORDS: Gaming Console, Gesture, TensorFlow, Sensor, WebGL.
INTRODUCTION:
Nowadays, virtual environments are considered an efficient means of human interaction, as shown by their diverse fields of application, which range from phobia therapy and military simulation to medical training. We decided to create a button-less model in which commands or inputs are given to the game by performing gestures in the air; for this, we need the help of sensors. Progressive advances in electronics have further widened the spectrum of human-computer interaction, and the conventional interface of keyboard, mouse, and pen is no longer keeping pace. Using hand gestures as an input method provides natural human-computer interaction and is valuable for controlling gaming applications. Game theory is a branch of mathematics that can be used to analyse system operation in decentralized and self-organizing networks. It describes the behaviour of the major players in a game, who may be cooperative or non-cooperative while attempting to maximize their outcomes. In this regard, sensors manage their actions with respect to the power resources available for sensing and for communicating among themselves and with a global controller, so that the assigned task can be completed effectively as desired. In the present work, hand gestures are used to control 2D and 3D gaming applications.
LITERATURE REVIEW:
Keogh1 performed a study showing that players derive more enjoyment from video games that use motion and gesture controllers, such as the Nintendo Wiimote, Microsoft Kinect, and PlayStation Move, than from video games that use more traditional button- and trigger-based controllers. Pirker et al.2 examined the Leap Motion, a small device placed facing upwards next to the user's keyboard or PC. It uses two infrared cameras capable of capturing up to 200 frames per second and offers a higher motion resolution than Microsoft's Kinect. The controller is primarily marketed as a standalone device, though it was later integrated into laptops and other devices by manufacturers such as HP, and a mounted Leap Motion can also enhance virtual worlds with accurate motion detection. The controller is connected to the PC via USB and requires a software suite containing various playground applications and mini-games with simple interactions, such as picking flower leaves and positioning cubes. Their paper investigates the Leap Motion controller as a motion-controlled input device for PC games. Gesture-based interactions are integrated into two different game setups to investigate the suitability of this input device for interactive entertainment, with a focus on usability, user engagement, and individual motion-control sensitivity, and to compare it with traditional keyboard controls. In a first user study with 15 participants, they evaluated the experience of using the Leap Motion controller in the two game setups. The results revealed usability issues: the device becomes tiring to use after around 20 minutes. While its suitability for conventional video games is therefore described as limited, users see potential in gesture-based controls as training and rehabilitation tools.
Roccetti et al.3 discussed important trends quietly emerging in the area: game designers are gradually shifting their attention beyond the walls of gaming enthusiasts' homes, extending their interest to computer games that can be played in public spaces such as exhibitions and museums. Only a limited amount of research has considered the problem of producing computer games based on gesture-based interfaces that suit such settings5. Their paper addresses the problem of separating the design of a gesture-based interface for a console from the problem of designing it for a public-space setting. Specifically, it describes the design and implementation of an interface well suited to open immersive settings, as it relies on a simple and efficient set of algorithms which, combined with knowledge of the context in which a game is played, leads to a fast and robust interpretation of hand gestures6. After surveying what the market currently offers, we settled on the idea of implementing gesture-based gaming using WebGL, making the model platform-independent. We wanted the device to be as cheap as possible so that it could be mass-produced and used for a number of purposes.
The aim here is to create a virtual environment in which the user interacts through various gesture controls. A button-less gaming console model is desired, to avoid unnecessary interruptions due to discomfort and to take the technology in our hands to the next level. This is achieved by recording the user's activity with different types of sensors7,8,15,16,17:
1) Ultrasonic sensor18: measures the distance between an object and the sensor using ultrasonic waves.
2) Infrared sensor4: measures the heat of an object as well as detecting its motion.
3) Time-of-flight camera (ToF camera)11,12,13: a range-imaging camera system that employs time-of-flight techniques to resolve the distance between the camera and the subject for each point of the image; both this and the ultrasonic sensor reduce to the round-trip computation sketched after this list.
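Both the ultrasonic sensor and the ToF camera work on the same round-trip principle: a signal travels to the target and back, so the distance is the propagation speed times half the measured time. Below is a minimal Python sketch of that computation; the speeds are standard physical constants, but the example timings are illustrative values, not readings from real hardware.

```python
# Round-trip ranging: distance = propagation_speed * elapsed_time / 2.
# The factor of 2 accounts for the signal travelling out and back.

SPEED_OF_SOUND_M_S = 343.0          # ultrasonic sensor (air at ~20 C)
SPEED_OF_LIGHT_M_S = 299_792_458.0  # time-of-flight camera

def round_trip_distance(elapsed_s: float, speed_m_s: float) -> float:
    """Distance to the target given the echo's round-trip time."""
    return speed_m_s * elapsed_s / 2.0

# An ultrasonic echo after 5.8 ms corresponds to roughly 1 m:
print(round_trip_distance(5.8e-3, SPEED_OF_SOUND_M_S))  # ~0.99 m
# A ToF pixel measuring 6.7 ns also corresponds to roughly 1 m:
print(round_trip_distance(6.7e-9, SPEED_OF_LIGHT_M_S))  # ~1.00 m
```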
After an input has been captured and the corresponding action performed, this activity is used for further development of the model with TensorFlow. TensorFlow is an end-to-end open-source platform for machine learning, with a comprehensive, flexible ecosystem of tools, libraries, and community resources. To create a virtual gaming environment, we first need a real-time gaming application9,10. This application is designed with the Unity 3D game engine; the language used is C# (pronounced C-sharp).
A gesture is an action that must be perceived by another party and that conveys some piece of information14. A gesture is usually understood as a movement of part of the body, especially a hand or the head, to express an idea or meaning, but in this work only hand movements are considered. The inspiration for this work came from a disabled person who was driving his wheelchair by hand with a considerable amount of difficulty; this motivated the design of a device that would enable such people to drive their chairs without having to touch the wheels. Another objective of this application is to make the device simple as well as inexpensive, so that it can be mass-produced and used for a number of purposes.
PROPOSED METHOD:
The gesture-based gaming console comprises three major components: gaming applications based on WebGL built with the Unity game engine, a hands dataset from EgoHands, and model training using TensorFlow.
Flow of control and data:
The control and data flow, describing how the various sensors are used to achieve the target objectives along with the steps taken to obtain an accurate classifier, is illustrated in Figure 1.
Figure 1. The control and data flow of the model
First, an input image of the hand is taken through the PC camera module. Then the convexity defects of the hand contour are found; a convexity defect can be described as the calculated difference between the convex hull and the contour, that is, the points on the contour farthest from the convex points. So, if the fingertips are taken as the convex points, the troughs between the fingers are the convexity defects. Once these defects are found, the fingertips and fingers are determined by analysing the defects and their angles. With all this in place, hand gestures made with the pinkie, index finger, or thumb are identified and recognized. This is how the input is taken and the corresponding action is performed.
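The console itself runs this pipeline in the browser, but the same contour analysis can be illustrated with OpenCV in Python. The sketch below follows the steps just described; the HSV skin-threshold values and the 90-degree angle cut-off are assumptions for illustration, not the tuned parameters of the actual system.

```python
import cv2
import numpy as np

def count_fingers(frame_bgr):
    """Estimate raised fingers from convexity defects of the hand contour."""
    # 1. Segment the hand (naive skin threshold; illustrative values only).
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 30, 60), (20, 150, 255))

    # 2. Take the largest contour as the hand.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)

    # 3. Convexity defects: contour points farthest from the convex hull,
    #    i.e. the troughs between the fingers.
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0

    troughs = 0
    for start_idx, end_idx, far_idx, _ in defects[:, 0]:
        s, e, f = hand[start_idx][0], hand[end_idx][0], hand[far_idx][0]
        # 4. A trough between two fingertips forms a sharp angle at the defect.
        a, b = s - f, e - f
        cos_angle = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-6)
        if np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))) < 90:
            troughs += 1

    # n troughs between fingers imply n + 1 raised fingers.
    return troughs + 1 if troughs else 0
```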
Hands data taken from EgoHands: The data used in this work comes primarily from the EgoHands dataset, which comprises 4,800 images of human hands with bounding-box annotations in various settings (indoor, outdoor), captured with a Google Glass device.
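EgoHands also provides pixel-level polygon outlines of each hand, from which bounding boxes can be derived. A minimal sketch of that conversion is shown below; loading of the dataset's metadata is omitted, and `polygon` is assumed to be an N×2 array of (x, y) vertices.

```python
import numpy as np

def polygon_to_bbox(polygon: np.ndarray):
    """Axis-aligned bounding box (x_min, y_min, x_max, y_max) of a hand polygon."""
    xs, ys = polygon[:, 0], polygon[:, 1]
    return float(xs.min()), float(ys.min()), float(xs.max()), float(ys.max())

# A toy three-vertex hand outline:
print(polygon_to_bbox(np.array([[120, 80], [200, 95], [160, 170]])))
# -> (120.0, 80.0, 200.0, 170.0)
```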
Model training: A model is trained to detect hands using the TensorFlow Object Detection API. For this project, a Single Shot MultiBox Detector (SSD) was used with the MobileNetV2 architecture. Results from the trained model were then exported as a saved model. Additional details on how the model was trained can be found on the TensorFlow Object Detection API GitHub repo.
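Once exported, the saved model can be reloaded and queried like any TensorFlow 2 SavedModel. The sketch below shows the standard inference pattern for detectors exported from the Object Detection API; the model path and the score threshold are placeholders, while the output dictionary keys are the API's standard ones.

```python
import numpy as np
import tensorflow as tf

# Load the exported detector (the path is a placeholder for the exported model).
detect_fn = tf.saved_model.load("exported_model/saved_model")

def detect_hands(frame_rgb: np.ndarray, score_threshold: float = 0.5):
    """Run the SSD MobileNetV2 hand detector on one RGB frame."""
    # The Object Detection API expects a batched uint8 tensor [1, H, W, 3].
    input_tensor = tf.convert_to_tensor(frame_rgb[np.newaxis, ...], dtype=tf.uint8)
    detections = detect_fn(input_tensor)
    boxes = detections["detection_boxes"][0].numpy()   # normalized [ymin, xmin, ymax, xmax]
    scores = detections["detection_scores"][0].numpy()
    return boxes[scores >= score_threshold]
```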
WebGL:
WebGL is a JavaScript API for rendering interactive 2D and 3D graphics within any compatible web browser without the use of plug-ins. WebGL is fully integrated with other web standards, allowing GPU-accelerated physics, image processing, and effects as part of the web page canvas.
RESULTS:
The hands dataset and the webcam can be accessed and combined with the game code through WebGL. This makes the model cheaper and much easier to use for a mass audience: rather than buying all the different kinds of sensors normally required for gesture control, the sensors built into one's PC can serve as a low-budget gesture-controlled device.
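As a sketch of how the built-in webcam stands in for dedicated sensors, the loop below reads frames with OpenCV and turns the detected hand's horizontal position into a left/right command. It reuses the `detect_hands` helper sketched earlier; the midline rule is an illustrative mapping, not the exact control scheme of the game described next.

```python
import cv2

cap = cv2.VideoCapture(0)  # built-in PC webcam
try:
    while True:
        ok, frame_bgr = cap.read()
        if not ok:
            break
        frame_rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
        boxes = detect_hands(frame_rgb)  # from the inference sketch above
        if len(boxes):
            # Hand centre relative to the frame midline decides the command.
            ymin, xmin, ymax, xmax = boxes[0]
            command = "LEFT" if (xmin + xmax) / 2 < 0.5 else "RIGHT"
            print(command)  # in the full console this would drive the WebGL game
        cv2.imshow("webcam", frame_bgr)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
finally:
    cap.release()
    cv2.destroyAllWindows()
```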
The basic objective of the game is to keep playing as long as possible, passing the levels as they come along. The game itself is very simple: a ball is stuck inside a clock, and the player must prevent it from touching the hands of the clock. The hour-hand keeps changing its pace and direction randomly. When the player presses a button, or in this case makes a gesture, the ball changes its direction as well and is safe from touching the hour-hand. This continues until the player fails to avoid the hour-hand. As the game goes on, the pace of the hour-hand also increases, which makes it harder for the player to avoid contact. The basic platform designed is shown in Figure 2.
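The game itself is written in C# in Unity, but its core rule can be sketched in a few lines of framework-agnostic Python: every recognized gesture flips the ball's direction, while the hour-hand randomly changes direction and gradually speeds up. All numeric constants below are illustrative, not the values used in the actual game.

```python
import random

class ClockGame:
    """Minimal model of the clock game: avoid the hour-hand, flip on gesture."""

    def __init__(self):
        self.ball_angle = 0.0      # ball position on the clock rim (degrees)
        self.ball_direction = 1    # +1 clockwise, -1 anticlockwise
        self.hand_angle = 180.0    # hour-hand position (degrees)
        self.hand_speed = 30.0     # degrees per second, grows over time
        self.hand_direction = 1

    def on_gesture(self):
        """A recognized hand gesture reverses the ball, like a button press."""
        self.ball_direction *= -1

    def update(self, dt: float) -> bool:
        """Advance one frame; returns False once the hand catches the ball."""
        if random.random() < 0.02:          # hand randomly reverses direction
            self.hand_direction *= -1
        self.hand_speed += 0.5 * dt         # the game gets steadily harder
        self.hand_angle = (self.hand_angle
                           + self.hand_direction * self.hand_speed * dt) % 360
        self.ball_angle = (self.ball_angle
                           + self.ball_direction * 60.0 * dt) % 360
        gap = abs(self.ball_angle - self.hand_angle)
        return min(gap, 360 - gap) > 5.0    # False on collision with the hand
```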
Figure 2. Designed WebGL platform
As described above, a model was trained to detect hands using the TensorFlow Object Detection API, with a Single Shot MultiBox Detector (SSD) on the MobileNetV2 architecture, and the results were exported as a saved model. Screenshots of the training and of the final layout are shown in Figures 3 and 4.
Figure 3. Model training
Figure 4. Final layout of the model
CONCLUSION:
The objective was met with no extra hardware required: a sensor-enabled PC with a working webcam is the only prerequisite for this model. The code for the 2D and 3D games can be written in Unity using C#. These code modules are combined with the EgoHands dataset, which, together with TensorFlow, improves the quality of the webcam input. The dataset and the gaming application are then accessed through the webcam of the PC, by which the user inputs commands for the corresponding movements in an application. The result is a cheap gesture-based gaming console model that is ready to build at home and can be developed by anyone with some knowledge of machine learning concepts and Unity gaming modules.
CONFLICT OF INTEREST:
The authors have no conflicts of interest regarding this investigation.
REFERENCES:
1. Keogh B., "Do gesture-based controllers push the right buttons for gamers?" (2012)
2. Pirker J. et al, “Gesture-based Interactions in Video Games with Leap Motion Controller”, International Conference on Human-Computer Interaction (2017)
3. Roccetti M. et al, "Playing into the wild: A gesture-based interface for gaming in public spaces", Journal of Visual Communication and Image Representation 23(3):426-440 (2011)
4. K. Hicks and K. Gerling, “Exploring casual exergames with kids using wheelchairs,” in Proceedings of the 2nd ACM SIGCHI Annual Symposium on Computer-Human Interaction in Play, CHI PLAY 2015, pp. 541–546, New York, NY, USA, October 2015.
5. S. Chaudhary and P. Nagpal, "AI Based Traffic and Automobile Monitoring System", International Journal of Innovative Research in Computer Science and Technology (IJIRCST), Volume 8, Issue 3 (2017).
6. K. M. Gerling, R. L. Mandryk, M. Miller, M. R. Kalyn, M. Birk, and J. D. Smeddinck, “Designing wheelchair-based movement games,” ACM Transactions on Accessible Computing (TACCESS), vol. 6, no. 2, 2015.
7. Jaspreet Kaur, Rajdeep Singh Sohal. Multi Sensor based Biometric System using Image Processing. Research J. Engineering and Tech. 2017; 8(1): 53-62.
8. Himani Jerath, Kavala Kotesh Phani Rohith. EMG Sensor based Wheel Chair Control and Safety System. Research J. Pharm. and Tech. 2019; 12(6): 2730-2735.
9. Vishal Khilari, Akash Phadatare, Aman Samarth. WEARTRONICS- A Review of Wearable Technologies in Smart Textiles. Research J. Science and Tech. 2017; 9(4): 675-685.
10. Z. Mary Livinsa, G.Mary Valantina. GHR Monitoring with RSSI Tracking System for Alzheimer’s Disease Patients. Research J. Pharm. and Tech 2019; 12(1): 280-282.
11. Ira Shukla, V Suneetha. Biosensors: Growth and Market Scenario. Research J. Pharm. and Tech 2017; 10(10):3573-3579.
12. Rajesha N, HL Viswanath. Increase Flash Memory in MMULess Embedded Systems. Research J. Engineering and Tech. 5(4): Oct.-Dec., 2014 page 208-216.
13. Ankita B. Kamble, Komal K. Wankhede, Shephalee A. Bagadte, Kunal Purohit. Path Recognizer for Blind Person. Int. J. Tech. 2016; 6(1): 11-13.
14. S. N. Shivappriya, R. Dhivyapraba, A. Kalaiselvi, M. Alagumeenakshi. Telemedicine Approach for Patient Monitoring System using IOT. Research J. Engineering and Tech. 2017; 8(3): 233-236.
15. T. R. Sanodiya, Piyush Jha. Mechanoluminescence of NaAlSiO4:Eu, Dy phosphor for developing impact sensor. Research J. Engineering and Tech. 2017; 8(4): 311-314.
16. A. Narmada, P. Sudhakara Rao. RFID Integration with Wireless Sensor Networks. Research J. Engineering and Tech. 2018;9(2): 207-213.
17. Nagpal P., Chaudhary S. (2020) Health Monitoring Multifunction Band Using IOT. In: Dutta M., Krishna C., Kumar R., Kalra M. (eds) Proceedings of International Conference on IoT Inclusive Life (ICIIL 2019), NITTTR Chandigarh, India. Lecture Notes in Networks and Systems, vol 116. Springer, Singapore.
18. Chaudhary, Sarika, and Pooja Batra Nagpal. "Live location tracker." Global Research and Development Journal for Engineering 4, no. 10 (2019).
Received on 14.05.2021; Accepted on 19.06.2021. ©A&V Publications. All rights reserved. Research J. Engineering and Tech. 2021; 12(2): 51-56. DOI: 10.52711/2321-581X.2021.00009