D. Smith

Author
since Jan 11, 2013

Recent posts by D. Smith

Now, I know MiFare is a company which makes NFC chips.



That's not quite true. MiFare is an NFC technology standard that many NFC products are based on. Today, the MiFare tech is pretty much wholly owned by NXP Semiconductor (which IS a company that makes NFC chips and smart cards).

The standards used in the NXP MiFare series of products are pretty much completely supported by Android devices, as evidenced by the two applications NXP has in Google Play for reading and programming tag elements:
https://play.google.com/store/apps/details?id=com.nxp.nfc.tagwriter
https://play.google.com/store/apps/details?id=com.nxp.taginfolite
11 years ago
Ashish -

Most of the designers we work with use Photoshop as their main tool to create UI layouts and cut graphic assets, but this is for traditional app development. Game development studios may have entire art departments working on the assets for a game, depending on how complex it is. The number of designers needed to complete your project depends entirely on the time you have to complete the work and how complex the application's UI is. I don't have many tips regarding game development, but there is another book by an Apress author, Beginning Android Games, that deals specifically with the issues of developing games; you may want to investigate it: http://www.apress.com/9781430246770

However, for the more traditional pieces, like icons and backgrounds, there are tools on the web and in the Android SDK. The best site I'm aware of is the Android Asset Studio, which was created and is maintained by a developer on the Android team at Google. You can use this tool to create application icons and other icon assets you may need, and they will be scaled for all device densities. Last time I checked, much of this functionality had also been added to the ADT plugin for Eclipse, applied when you create a new project.
http://android-ui-utils.googlecode.com/hg/asset-studio/dist/index.html

There is also a tool in the SDK called draw9patch that can assist you with creating a 9-patch image from a static PNG. 9-patch images are small assets designed to stretch to fill the space required on any given device. You can find out more about them from the Android developer documentation, as we try to cover the basics of creating flexible graphic assets in our book as well.
http://developer.android.com/tools/help/draw9patch.html
11 years ago
"Problem" may not be the right word to describe this, but device variance is an issue you need to consider when developing an Android application. Android runs on a wide variety of device hardware and along with that comes a handful of different screen size/resolution/aspect ratio combinations. I would say that the number of device screens to support does not even begin to match the number of Android devices on the market, most all devices (certainly those running Google Play) will fit into about 2-3 types for handsets and 2-3 types for tablets.

The native Android framework works very hard to provide you with the tools to develop your application's user interface in a way that will be flexible and scale to accommodate the different screens. The more you can utilize the resource framework Android provides to select appropriate layouts, images, dimensions, etc. that best fit different screen types, the less work supporting the Android ecosystem will become for you as a developer. This means, above all else, designing your UI in a flexible manner. Here are a few thoughts on this:

  • You can fix the size or position of an element, but not both. It is okay to provide fixed assets in your UI (i.e. a button must be this size to fit the background I've created for it, or this element should always be 10dp from the top of the screen), but avoid trying to do both. If you must fix the size, allow the location to float so that extra space on larger screens is used appropriately.

  • Where possible, use scalable image assets. Creating a static image for a button background may not be the best approach if you want that button style to wrap the text you put in it, as that background will stretch and skew in many cases. The Drawable class and 9-patch graphics are your friends here; they allow you to create graphics in your application that are not dependent on a fixed pixel size.

  • Always use scaled dimensions. In the first point I mentioned "10dp", which is a scaled dimension unit Android provides. By using these values to declare fixed sizes/positions instead of direct pixel values, Android will do the work of making that dimension look correct based on the density of the device's screen.
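The dp-to-pixel mapping described above can be sketched in plain Java (the class and method names below are illustrative, not part of the Android SDK; on a device the framework performs this conversion for you):

```java
// Sketch of the density scaling behind the "dp" unit: one dp equals one
// physical pixel on a 160dpi (medium-density) baseline screen.
public class DpConverter {
    private static final float BASELINE_DPI = 160f;

    // Convert a dp value into physical pixels for a screen of the given dpi.
    public static float dpToPx(float dp, float screenDpi) {
        return dp * (screenDpi / BASELINE_DPI);
    }

    public static void main(String[] args) {
        System.out.println(dpToPx(10f, 160f)); // 10.0 (medium density)
        System.out.println(dpToPx(10f, 320f)); // 20.0 (extra-high density)
    }
}
```

In a real layout you would simply declare "10dp" in your XML resources and let Android apply the density scaling.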

    In response to your last question, there are several recipes in Chapter 2 that assist you in seeing how the resource system and flexible graphic assets can help you build a great native application that isn't dependent on any one device type.
    11 years ago
    Thanks everyone. It's been fun spending the week here in the forums.
    11 years ago
    It's not a programmable thing; sync frames are a concept of video encoding. Encoded video files don't store every frame of data, in order to keep the file size down. They save only specific frames, known as "key frames" or "sync frames", as a full frame, and then just the pixel deltas for the frames in between. This is why, by default, sync frames are all the retriever returns; they are easier to find and no extra decoding needs to be done. How many sync frames are inserted in the video file is up to the application that encoded it (on the desktop, it's usually a configurable setting of the encoding software), and I'm sure it's a fixed value on the device.

    It's theoretically possible for a video file to have only the initial frame fully rendered, although it's unlikely, as playback would be very slow. However, having issues with the MP4 I used rules out the camera's recording capability as the problem.

    My code does not create any IMG files.



    Interesting: in your initial post you mentioned you were getting only one initial frame; now you are getting zero. I wonder how just a different movie file could affect things in that way...are you able to get your code working in the emulator?
    11 years ago
    The video sample is one I just pulled from the web; the file is at the following link: http://dev.exiv2.org/attachments/345/04072012033.mp4

    That same site has other video files as well: http://dev.exiv2.org/boards/3/topics/1189
    11 years ago
    You can use Intent flags to manage how a new Activity is launched; things like pulling an existing instance from within the stack rather than creating a completely new one, or modifying how the new instance is tracked in the stack history. However, there is no flag that allows you to insert a new Activity in between two existing Activity instances in the stack...the new Activity you launch will always be on the top of the stack. Allowing this behavior at the framework level has too many consequences that can break result delivery chains that may exist from one instance to another.
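    For the cases that are supported, a quick sketch of one such flag (ExistingActivity is an illustrative name, not a real class):

```java
// Bring an existing instance of the Activity to the top of the stack
// (if one exists) instead of creating a new instance on top of it.
Intent intent = new Intent(this, ExistingActivity.class);
intent.addFlags(Intent.FLAG_ACTIVITY_REORDER_TO_FRONT);
startActivity(intent);
```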
    11 years ago
    How is the data getting from the keyboard on the device into the write() method? It looks like each character you type is getting interpreted as four bytes (perhaps cast to an int along the way?) before being passed to this method. Also, what does the sendOverSerial() method look like? Is that a method in the library you are using? I could not find GraphicsTerminalActivity in the GitHub source code.
    11 years ago
    Android doesn't really provide you the control over the Activity stack to do anything like this directly. If you are allowing a user to interact with a paused Activity while it sits underneath another Dialog-themed Activity (a pattern that really isn't recommended), then you will have to implement logic in your application to finish the current "Dialog" and re-display it again after the new Activity has launched (persisting its state somewhere so it can show up looking the same as it did before you closed it). The "Dialog" can either finish itself when it detects that it has been paused (because your new A2 or A3 has come over the top of it), and that new Activity can be responsible for launching a new version of the "Dialog" in onCreate(); or you might be able to use the finishActivity() method from the paused Activity to force finish the "Dialog" before launching the next Activity, which can then re-display the "Dialog" itself.

    The above is all very hacky because it truly is using the Activity stack in a way it was not designed to be used. Another approach you might consider is to implement this application with Fragments instead. Using Fragments, you can swap out the screen section underneath (using the replace() method) because you define the container view location where a Fragment hierarchy should load. The "Dialog" can also be a Fragment that just displays itself inside a different container location.
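    A sketch of that Fragment swap (the container id and Fragment class are illustrative names from your own project, not the framework):

```java
// Replace the screen section underneath with a new Fragment hierarchy;
// R.id.content_container and DetailFragment are hypothetical names.
getFragmentManager().beginTransaction()
        .replace(R.id.content_container, new DetailFragment())
        .addToBackStack(null) // so Back can restore the previous section
        .commit();
```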
    11 years ago
    I have taken and run your original code, along with the modifications you made, on my Nexus S device and everything basically works as expected. Depending on the options selected, a 14s test video produces around 24 images (OPTION_CLOSEST_SYNC) or 240 images (OPTION_CLOSEST). I do not have Droid X2 hardware, but I also ran this code in the Motorola Droid X emulator, which uses the same software image (so the performance should be comparable), and the results were the same there as well.

    This leads me to believe that the issue lies either with the video you are reading or with that device, because I cannot find any further fault with your code and it seems to be working from the tests I've run.
    11 years ago
    A couple thoughts:

  • Since presumably the value of max is 7100 (no errors parsing it into an int), only the first seven iterations of the loop would actually pull a frame time inside the video's duration. With index = 8, the requested frame time is already past the video duration at 8 seconds (8,000,000us); index = 10 -> 10 seconds, and so on. So all ~7,090 subsequent iterations probably continuously pull the last readable frame; perhaps try changing the requested time so you get better granularity of presentation time values (i.e. so all values of the loop are within the duration). Technically, though, we would expect at least a few of the first seven requests to be different frames (unless the video really has very few sync frames in it), so...
  • Another thing that has given me issues in the past is trying to use the OPTION_CLOSEST flag; it seems that some device implementations are unable to read anything but sync frames out of encoded media types. Perhaps try a different flag that specifically only reads back the sync frames from the content.


    So in other words, try modifying the frame request like so:
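    A minimal sketch of that request, assuming your loop already has a MediaMetadataRetriever named retriever and a microsecond timestamp timeUs (the variable names are assumptions):

```java
// Request only the nearest sync (key) frame at the given timestamp;
// 'retriever' and 'timeUs' are assumed to exist in your extraction loop.
Bitmap frame = retriever.getFrameAtTime(
        timeUs,                                       // desired time, in microseconds
        MediaMetadataRetriever.OPTION_CLOSEST_SYNC);  // sync frames only
```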

    11 years ago
    Actually, support for the new Maps API is determined by the version of Google Play the user has on their device, via the new Google Play Services library. Devices as far back as Android 2.2 can receive Google Play Services updates, so as long as the device has that minimum version of Android and supports the other hardware requirements (OpenGL), it can work with the latest Maps API.

    One of the examples in the book illustrates using a custom ItemizedOverlay subclass called LocationOverlay to display a list of small image icons on top of the map at particular locations. Here is a snippet of code from the example:
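    The book's exact snippet isn't reproduced here; as a rough, hypothetical sketch of the Maps v1 ItemizedOverlay pattern it follows (class and member names below are illustrative, not the book's actual LocationOverlay code):

```java
import java.util.ArrayList;
import java.util.List;

import android.graphics.drawable.Drawable;

import com.google.android.maps.GeoPoint;
import com.google.android.maps.ItemizedOverlay;
import com.google.android.maps.OverlayItem;

// Hypothetical sketch of an ItemizedOverlay subclass for the Maps v1 SDK.
public class LocationOverlay extends ItemizedOverlay<OverlayItem> {
    private final List<OverlayItem> items = new ArrayList<OverlayItem>();

    public LocationOverlay(Drawable marker) {
        // Anchor the marker image's bottom-center at each GeoPoint
        super(boundCenterBottom(marker));
        populate();
    }

    // Add a small image icon at a particular map location
    public void addLocation(GeoPoint point, String title) {
        items.add(new OverlayItem(point, title, null));
        populate(); // notify the overlay that the item list changed
    }

    @Override
    protected OverlayItem createItem(int i) {
        return items.get(i);
    }

    @Override
    public int size() {
        return items.size();
    }
}
```

You would then attach it to the map with something like mapView.getOverlays().add(overlay).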



    The source for LocationOverlay and how to add it to a visible map is in the full example.
    11 years ago
    Hello Amit -

    Yes, that Android Google Maps SDK does allow you to place content on top of the map itself; the term the SDK uses for this is Overlays. The book does contain several recipes for dealing with the Maps SDK, including adding Overlay items for specific location points as well as for the user's current location. Unfortunately, however, it deals only with v1 of the Maps SDK (which has been around since ~Android 1.5).

    Shortly after the book went to publishing (thanks, Google...), Google released Maps API v2 (https://developers.google.com/maps/documentation/android/), which is therefore not referenced directly in the book. While this API is not exactly the same, it is built on the same concepts; in the newer API, overlay data points are referred to as Markers. Maps v2 has some higher device requirements than v1 (OpenGL ES 2, for example), so you may find yourself still needing to use v1 for the time being, but if you can jump to v2, the Google documentation will be the best resource for now.
    11 years ago
    Hi Stuie -

    The biggest thing that I tell most Java developers they will need to deal with when coming into Android is figuring out which parts of the SE/EE framework they are used to using are in Android and which parts aren't. The core Java APIs in Android come from the Apache Harmony project's implementation, so not everything will be available. I would suggest perusing the class documentation on http://developer.android.com in the java.* packages to get a feel for what parts of the API are common. There are also a few APIs that have been replaced but are similar; for instance, the android.graphics package has a VERY similar API to the 2D Canvas drawing available in Oracle Java, but is more Android-specific in nature.

    The second thing I tell all desktop/server Java developers is to remember that this is a mobile device. While the hardware capabilities of these devices are quite impressive, they are still nowhere near what you might be used to developing for. Memory and CPU time are at a premium, so being judicious with the amount of work you want to do at any given time, and reusing existing objects versus creating new ones whenever possible are two great things to keep in mind as you begin to develop for mobile.

    Now, as to your question on concurrency. The Android APIs do include the java.util.concurrent package, so all the classes pure Java has available for thread synchronization (i.e. queues, locks, and tasks) can be used in Android applications. In fact, AsyncTask, one of the most widely used Android APIs for doing work on a background thread, is built on top of Executor and FutureTask from this package. You also have access to the full Thread API in your applications. Be wary, though, of what I mentioned earlier...this is still a mobile device. Be very judicious with the amount of parallelization you introduce; 10 concurrent threads end up being of no use on a single-core machine sitting at 100% usage. Most often, it isn't recommended that you create more than 2N threads in your application (where N is the number of device cores); I usually stick to 2-3 background threads max as a rule of thumb.

    I would recommend starting your search with AsyncTask, which allows you to load up tasks serially or using thread pools on the device. In the thread pool case, the ThreadPoolExecutor is provided by the framework, so you can be fairly confident that it won't allow more threads than the system can rightfully handle.
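    The 2N sizing rule above can be sketched with plain java.util.concurrent (the class and method names here are illustrative, and this is not Android's AsyncTask internals):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: a background pool capped at 2N threads (N = available cores),
// per the 2N rule of thumb mentioned above.
public class BoundedPool {
    // Run n trivial tasks on the bounded pool and return how many completed.
    public static int runTasks(int n) {
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(2 * cores);

        AtomicInteger completed = new AtomicInteger();
        for (int i = 0; i < n; i++) {
            pool.execute(completed::incrementAndGet); // each task just counts itself
        }

        pool.shutdown();
        try {
            pool.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // preserve interrupt status
        }
        return completed.get();
    }

    public static void main(String[] args) {
        System.out.println(runTasks(10)); // prints 10
    }
}
```

On Android you would normally let AsyncTask's executor manage this for you rather than build a pool by hand.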

    Welcome to the world of Android development!
    11 years ago

    (a) A very good NFC chip maker?



    One of the largest manufacturers of NFC devices is NXP Semiconductor. They have a variety of different tags that can be purchased from commodity electronics vendors. Here is a link to their NFC site: http://www.nxp.com/campaigns/nfc/

    (b) If I get one, then how do I program it?



    NXP also has an Android app on Google Play that allows you to use your Android device to program compatible NFC tags with data from your device. I would recommend you use it as a starting point: https://play.google.com/store/apps/details?id=com.nxp.nfc.tagwriter&hl=en
    11 years ago