Friday, October 18, 2013

Augmentation and Telepresence #throughglass

Talk by Martin Wojtczyk and Devy Tan-Wojtczyk

Augmented reality enhances the real world with computer-generated imagery, letting the user either see information overlaid on their surroundings or be placed into a fully virtual environment.
Telepresence is the use of virtual reality technology to transmit video and/or audio so people can communicate as if they were present in a remote location.

To program for Google Glass, there are two methods:
Google's Mirror API connects Glassware (HTML) to Google's servers and then to Glass. This makes programming and deployment easy, but introduces lag between the device and the database where your program lives.
With the GDK, developers can program directly for Glass and install their apps on the device itself, allowing real-time, zero-lag experiences.
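As a rough illustration of the Mirror API path, a Glassware backend inserts timeline cards by POSTing JSON to the Mirror REST endpoint. The sketch below only builds the request; the access token and card text are placeholders, not values from the talk:

```python
import json
from urllib.request import Request

# Mirror API timeline endpoint (see developers.google.com/glass).
MIRROR_TIMELINE_URL = "https://www.googleapis.com/mirror/v1/timeline"

def build_timeline_insert(access_token, text):
    """Build (but do not send) an HTTP request that inserts a simple
    text card into the wearer's Glass timeline."""
    body = json.dumps({"text": text}).encode("utf-8")
    return Request(
        MIRROR_TIMELINE_URL,
        data=body,
        headers={
            "Authorization": "Bearer " + access_token,
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Placeholder token for illustration; a real one comes from OAuth 2.0.
req = build_timeline_insert("ACCESS_TOKEN", "Hello from Glassware!")
```

Sending it would then be a single `urllib.request.urlopen(req)` call; the GDK path, by contrast, skips the round trip entirely by running as an app on the device.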

There are many use cases that we are tackling through our startup.
Shopping
Map overlays let shoppers find a store and navigate to it. The overlay then changes to an on-screen display of products to purchase. When an item is selected, Glass navigates the shopper to it and shows prices and product reviews.
Telepresence
Telepresence allows management to contact workers in factories around the world and, through Hangouts, see what the workers see. This lets managers keep track of projects while in another country.
Telepresence also allows robots in labs to be operated remotely, so samples can be checked from home. Likewise, exploration and rescue missions can be piloted from safety.

Resources:
Mirror API
https://developers.google.com/glass/overview
Google Glass samples
https://github.com/googleglass
OpenCV4Android SDK
http://docs.opencv.org/doc/tutorials/introduction/android_binary_package/O4A_SDK.html

Functional Fashion and the Future of Mobile - Wearable Android

The future of mobile devices is wearable.
Talk by Siamak Ashrafi, researcher at YLabs

Embedded devices are all around us in card readers, registers, and the like; the talk argued that dedicating specialized hardware to each of these tasks is the wrong way of doing things.

MEMS sensors are so prolific that you will find them in all mobile devices. The sensors include acceleration, temperature, gravity, gyroscope, light, magnetic field, orientation, pressure, proximity, humidity, rotation vector, camera, microphone, and touch. Radios include CDMA, GSM, Wi-Fi, NFC, and Bluetooth. Sensors make a smartphone close to self-aware: code that combines the gravity, accelerometer, and linear-acceleration sensors lets a device sense that you have taken a fall and notify the appropriate people.
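As a sketch of how such fall detection might work (the thresholds below are illustrative, not from the talk): a fall typically shows up as a near-zero acceleration magnitude (free fall) followed shortly by a sharp impact spike.

```python
import math

FREE_FALL_THRESHOLD = 3.0   # m/s^2: near-zero total acceleration suggests free fall
IMPACT_THRESHOLD = 25.0     # m/s^2: a large spike suggests hitting the ground

def detect_fall(samples):
    """Return True if an accelerometer trace shows a free-fall phase
    followed by an impact spike. `samples` is a list of (x, y, z)
    readings in m/s^2."""
    falling = False
    for x, y, z in samples:
        magnitude = math.sqrt(x * x + y * y + z * z)
        if magnitude < FREE_FALL_THRESHOLD:
            falling = True            # device appears to be in free fall
        elif falling and magnitude > IMPACT_THRESHOLD:
            return True               # impact after free fall: likely a fall
    return False

# A stationary device reads roughly one g (9.81 m/s^2) on some axis.
resting = [(0.0, 0.0, 9.81)] * 10
fall = resting + [(0.1, 0.2, 0.5)] * 3 + [(5.0, 30.0, 12.0)]
```

A production implementation would of course debounce over time windows and fuse the gyroscope as well; this only shows the core idea.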

These sensors allow a phone to see, hear, and feel through its camera, microphone, and touchscreen. Coupled with Android, this sensor data can be pushed through Google Cloud Messaging, pairing sensor suites with devices that connect to the Internet and send the data anywhere. Near Field Communication tags allow quick transfer of information, including Bluetooth handshakes, credit card information, and Wi-Fi passwords. NFC tags only work in extremely close proximity, practically touching. These low-energy technologies enable a bevy of wearable devices.

The advantage of building gadgets around existing devices shows in the differences between Rovio and Romo:
Rovio
The device required specialized hardware and components, and it was late to market because of problems with that specialized hardware.
Romo
The device added wheels to an iPhone and could do the same as the Rovio for a fraction of the cost, and without the problems, since the hardware had already been vetted and tested by Apple.

Mind versus Body shift
We have shifted from using our bodies to survive to using our minds, but our bodies have not adapted to the current level of inactivity, which has led to numerous health problems.

There are two types of Android-connected devices on the market: those that run Android and those that only utilize it.

Devices that utilize Android but do not run it include the Jawbone Up, Lark, FitBit, and Nike+ FuelBand. The Jawbone Up tracks your sleep, activity, food and drink, mood, and more, but only reports what it has recorded when you plug it in. The Lark does much of what the Jawbone Up does but adds a small display, while the FitBit includes a display large enough to show a clock, with more data promised in future devices. The Nike+ FuelBand only works with iOS but includes a display akin to the FitBit's.

Watches
Pebble Watch: runs its own non-Android OS; black-and-white display
Sony Watch: runs a limited version of Android; color display
Samsung Watch: runs full Android, with the same hardware as top-level devices from a year ago
Phones
Apple M7 Motion Co-Processor
Glasses
GlassUp: read-only display for mail, messages, text messages, and notifications
Recon Jet: Android without Google software
Google Glass

Applications allow new technology to succeed by letting anyone adapt to and utilize something new. Glass is the next device that won't be left at home. While Glass is currently in its Explorer Edition, it will become more attractive and fashionable in the future. The hardware includes all of the usual smartphone pieces except the cell antenna, keeping the highest-power radio away from your head.

Glass originally used Google's Mirror API, connecting through App Engine, for application development; now you can use the GDK to program apps directly on the device. While the GDK allows deeper integration, the Mirror API lets programmers unfamiliar with Android develop with all of the tools needed to get information onto Glass.

One of Glass's primary features is built-in augmented reality. This requires a combination of powerful processors and sensors that wasn't available to the public until today's smartphones.

Android and Bluetooth Low Energy

Talk by Dario Laverde, Senior Developer Evangelist for HTC

Bluetooth Low Energy began at Nokia in 2001, was introduced as Nokia's Wibree, and was merged into the main standard in 2010 with Bluetooth 4.0.

The Bluetooth 4.0 core specification includes classic Bluetooth technology, Bluetooth Low Energy, and the Bluetooth Smart branding. Hardware-wise, Classic Bluetooth reaches up to 100 meters with a transfer rate of 1-3 Mbit/s and draws under 30 mA, while BLE has a maximum range of 50 meters, a 1 Mbit/s transfer rate, and draws under 15 mA.

We have seen Bluetooth Low Energy used in health, fitness, and proximity applications, but there are many more possibilities. These uses have their own specifications, known as BLE profiles. Each profile targets a particular application; manufacturers are expected to implement the profile specifications to ensure compatibility, but the core specification itself does not require them.

Some profiles include:
HRP: Heart Rate Profile for pulse monitoring
HTP: Health Thermometer Profile for medical temperature measurement
GLP: Glucose Profile for blood glucose monitors
CSCP: Cycling Speed and Cadence Profile
RSCP: Running Speed and Cadence Profile
FMP: Find Me Profile
PXP: Proximity Profile that allows the device to detect if another device is near

The Bluetooth SIG encourages BLE manufacturers to create new profiles and to extend existing ones with proprietary characteristics. This does create difficulty, however: app developers have to contact the third-party manufacturers for those specifications. BLE profiles sit on top of the Generic Attribute Profile (GATT) on the host, and BLE shares the same radio as classic Bluetooth but removes many layers to make the stack more efficient.

BLE Protocols
Generic Attribute Profile (GATT)
The phone takes the master/central role and runs the GATT client, while the BLE Smart device takes the slave/peripheral role and hosts services and characteristics, each with a value and descriptors. Some roles are left out here, including the broadcaster role. Android BLE only supports the central role, while iOS supports both.

BLE GATT Clients and Servers
Servers expose attributes and clients use them. Client actions include discovering, reading, writing, and confirming, while server actions include responding to client requests and sending notifications and indications.
To find more information, go to the Bluetooth SIG portal.
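The client/server split above can be sketched in miniature. This is a toy model, not a real BLE stack; the UUIDs are the SIG-assigned Heart Rate service (0x180D) and Heart Rate Measurement characteristic (0x2A37):

```python
class Characteristic:
    """A GATT characteristic: a value plus descriptors, owned by a service."""
    def __init__(self, uuid, value):
        self.uuid = uuid
        self.value = value
        self.descriptors = {}
        self.subscribers = []


class GattServer:
    """The peripheral side: exposes services and their characteristics."""
    def __init__(self):
        self.services = {}

    def add_characteristic(self, service_uuid, char):
        self.services.setdefault(service_uuid, {})[char.uuid] = char

    def write_value(self, service_uuid, char_uuid, value):
        char = self.services[service_uuid][char_uuid]
        char.value = value
        for callback in char.subscribers:   # push a notification to clients
            callback(value)


class GattClient:
    """The central side: discovers services, reads values, subscribes."""
    def __init__(self, server):
        self.server = server

    def discover_services(self):
        return list(self.server.services)

    def read(self, service_uuid, char_uuid):
        return self.server.services[service_uuid][char_uuid].value

    def subscribe(self, service_uuid, char_uuid, callback):
        self.server.services[service_uuid][char_uuid].subscribers.append(callback)


HR_SERVICE = "180d"        # Heart Rate service
HR_MEASUREMENT = "2a37"    # Heart Rate Measurement characteristic

server = GattServer()
server.add_characteristic(HR_SERVICE, Characteristic(HR_MEASUREMENT, 72))

client = GattClient(server)
received = []
client.subscribe(HR_SERVICE, HR_MEASUREMENT, received.append)
server.write_value(HR_SERVICE, HR_MEASUREMENT, 80)  # client is notified
```

The asymmetry mirrors the spec: the server owns the data and pushes notifications; the client can only discover, read, write, and subscribe.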

BLE Developer Device Kits
BLE chipset OEMs provide development kits which support different profiles. Texas Instruments' SensorTag has a bevy of sensors, including acceleration, rotation, pressure, humidity, magnetic field, and temperature.

Prior to Android 4.3, several OEMs already had BLE-capable devices on the market, each providing its own BLE API, including HTC, Samsung, Motorola, and others. The official BLE API was formally announced at Google I/O 2013 and arrived with API level 18. In most, if not all, cases you will build on top of the GATT profile, so your app must implement the BLE profiles it requires.

Demos
Scanning works slightly differently than in Classic Bluetooth, with its own set of callbacks.
It is also more efficient: searching only for Low Energy devices is much faster than scanning for both classic and low energy devices. Sadly, a known issue in Android is that, for the time being, it only allows four sensor notifications at a time.
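The callback-driven scan pattern can be sketched as follows. This simulates the flow rather than calling Android's actual API; the 16-bit service UUIDs (0x180D heart rate, 0x1809 health thermometer, 0x1816 cycling speed and cadence) are real SIG assignments:

```python
class LeScanner:
    """Sketch of callback-style LE scanning: each advertisement that
    matches the requested service UUIDs fires the callback immediately,
    rather than results arriving in one batch when the scan ends."""

    def __init__(self, advertisements):
        # (device_name, advertised_service_uuid) pairs "on the air"
        self.advertisements = advertisements

    def start_scan(self, service_uuids, callback):
        for name, uuid in self.advertisements:
            # Filtering by service UUID is what makes LE-only scans fast:
            # non-matching advertisements are dropped at the scanner.
            if not service_uuids or uuid in service_uuids:
                callback(name, uuid)


airwaves = [("HR Strap", "180d"), ("Thermometer", "1809"), ("Cadence", "1816")]
found = []
LeScanner(airwaves).start_scan({"180d"}, lambda name, uuid: found.append(name))
```

On Android the equivalent callback is registered with the BluetoothAdapter's LE scan call; the filtering idea is the same.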

Q & A
Resources/References
https://d.android.com/guide/topics/connectivity/bluetooth-le.html
http://processors.wiki.ti.com/index.php/Bluetooth_SensorTag
http://www.htcdev.com/devcenter/opensense-sdk/bluetooth-smart
Bluetooth SIG, Google I/O 2013, TI.com/ble
https://github.com/RadiusNetworks/android-ibeacon-service
contact info: dario.nycjava@gmail.com

Opening!


Greetings Google DevFest West 2013!

We'd like to start off by thanking Google for providing the facilities, rooms, T-shirts, and other prizes; GDG Silicon Valley and GDG San Francisco for putting on the event; and Josh for doing the website.


To view the schedule as well as look at a couple of the talks go to http://www.devfestwest.com/2013 .
For those at the talks, lunch will be on the patio behind the building, and there will be breaks for lunch and dinner, both catered. Talks will be held in buildings CL2 (Crane Beach Room) and CL3 (Maxwell Room).

Thanks for coming and have a great DevFest!

Setting up for DevFest West!

Badges and lanyards out, volunteers prepped, we are ready to go! We will have registration inside Building CL2's lobby (employee entrance). Open the floodgates!

Friday, September 13, 2013

Intro to App Engine and Cloud Computing

Wesley Chun is an engineer, writer, teacher, and one of the original engineers who built Yahoo! Mail. He stopped by our monthly GDG meetup to give everyone an introduction to Google App Engine and the Cloud Platform.


Cloud Platform

Google got involved in cloud computing out of necessity. Google's search index last year exceeded 100 petabytes (100 million gigabytes). Roughly 4 days of video are uploaded to YouTube every minute. We don't know what percentage of that is cat videos.

The cloud technologies Google provides are an extension of the work they did tackling those internal problems: App Engine and Compute Engine, platform as a service. There are three pillars to the offering: compute, storage, and application services. These combine to give you a platform to deploy your application worldwide without the usual worries of load balancing, availability, operating systems, and all the little details that typically eat up your time on the way to deployment.

Even better, the Cloud Platform provides APIs for functions that usually require extensive configuration: email, database access, URL fetch, cloud storage, authentication, and more.
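For a flavor of how little code a minimal App Engine Python app needed, here is a bare WSGI handler of the sort the Python runtime serves (on App Engine it would be wired up through an app.yaml entry, which this sketch omits):

```python
def application(environ, start_response):
    """A minimal WSGI app of the kind App Engine's Python runtime serves."""
    body = b"Hello, App Engine!"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

# Exercise the handler directly, the way a WSGI server would.
sent_status = []
response_body = b"".join(
    application({}, lambda status, headers: sent_status.append(status)))
```

The platform supplies the server, scaling, and routing; your code only has to answer requests.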

You get a full featured administrative console with too many features to list here. You'll have to explore it yourself. It's impressive.

App Engine

App Engine serves 7.5 billion page views every day. That's a "very googley" number.

Buddy Poke is a cutesy app that's been around almost as long as App Engine itself and is a good example of long term scaling, both in terms of registered users and daily use.

Gigya builds apps for special events, which is an example of short term scaling. Traffic spikes before and during the event, then dies off.

The royal wedding (Will & Kate) blog was hosted on App Engine. 32,000 users were hitting the app every second, peaking at 42k qps (thousand queries per second) when the royal couple kissed. You might call this "Royal Scaling." It was a real test of App Engine's scalability.

There are samples and tutorials available for web and mobile apps on App Engine, and a showcase of what other developers have built. There is a free tier if you want to try it (it only starts to cost money when you start to get significant traffic; by that point I hope you're making some money off your app), and don't worry about vendor lock-in. You can always move your application to another platform if you want.

Getting Started

You can play around with App Engine in a web-based SDK (no download required) at the Cloud Playground. Follow the link, clone one of the sample projects, and try it out.

Cloud Platform - Bringing it all Together

App Engine is the gateway to all other Google cloud services: databases (MySQL-compatible and NoSQL), Google Translate, BigQuery, the Google Prediction API, on-demand virtual machines (RHEL/CentOS, Debian), local persistent storage, public cloud storage; there's even a REST API to create and manage VMs. Of course, you can incorporate all the other Google APIs into your applications. Read more at the Cloud Platform Blog.

That's all for this month. Thanks to everyone who attended and watched the talk on our YouTube channel. Happy hacking!

Friday, August 16, 2013

Build Native Apps with HTML5

Joe Marini came by to show us some neat tricks on the Chrome Packaged Apps Platform, building native apps in HTML5.



We had a huge turnout for this talk!

The public release will include native launch points for each OS (the Mac dock, Windows Start menu, etc.). Packaged apps run offline as well as online; they run locally, and all the code (HTML5 and JS) is stored locally.

He showed some cool demos, including 500px, a code editor with syntax highlighting, and a magazine. The apps can use the native window controls if you want, or you can build your own using WebKit regions.

Packaged apps are an attempt to solve problems inherent in web applications, like intermittent connections. Most web apps have functions that should work offline, but they don't. With a packaged app you can retain offline functionality even when features requiring connectivity are unavailable. Packaged apps also have access to platform capabilities and hardware (USB, Bluetooth, sockets, etc.). Imagine playing a web game with any controller you want. You can create a richer, more immersive experience, with distribution and updates managed through the Chrome Web Store. Packaged apps also support in-app purchases with Google Wallet, as well as analytics.

Joe did a deep dive into the packaged app structure. One interesting point is the differentiation between online and offline features: developers need to think about what local resources are required for offline features to function in the absence of an Internet connection.

Want some specifics? No plugins: no Flash, no Java. You can embed native code using Native Client, but packaged apps can't use localStorage, because it is a synchronous API, and a synchronous call can block the main thread and hang your app. You still get the same functionality through an asynchronous equivalent, chrome.storage, which is supplied instead. You can also leverage Chrome's cloud features, such as the Sync API and syncFileSystem. The latter supports Google Drive out of the box and is extensible for use with other services like Dropbox.

Joe gave a lot of great pointers on how to build great packaged apps and some live code examples.

Like Dart? There is a chrome.dart github project, and as soon as Dart hits v1.0, they will start working on Dart for packaged apps.

Packaged apps are built with HTML5 and Chrome tools, but they live outside the browser, allowing more developer options and a richer user experience. It was standing room only for this talk. Pretty cool stuff.

Joe is a developer advocate for the Chrome team, you can find him on twitter and github:
@joemarini
github.com/joemarini