Still Deadlocked
https://www.setera.org/2012/03/11/still-deadlocked/ (Sun, 11 Mar 2012)

Wow… three months since my last post about my Android Clock Widget Project. While I’ve failed to bring stability to the clock selector during that time, I have figured out that the problem is not actually due to a deadlock. Instead, it appears that my project is tickling a bug in the Dalvik VM’s garbage collector.


Depending on the device and operating system level, there are subtle changes in behavior. In most cases, there is a crash log written to the /data/tombstones folder. The most revealing tombstone file has come from a Samsung Captivate running a version of the AOKP ICS ROM.

Build fingerprint: 'samsung/SGH-I897/SGH-I897:2.3.5/GINGERBREAD/UCKK4:user/release-keys'
pid: 1758, tid: 1777  >>> com.seterasoft.mclock <<<
signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr deadbaad
 r0 deadbaad  r1 00000001  r2 40000000  r3 00000000
 r4 00000000  r5 00000027  r6 50c53b40  r7 00000064
 r8 41338018  r9 00000024  10 50c53a6c  fp 50c53ab0
 ip ffffffff  sp 50c53a30  lr 400fdf79  pc 400fa694  cpsr 60000030
 d0  0000000000000000  d1  0000000000000000
 d2  0000000000000000  d3  0000000000000000
 d4  0000000000000000  d5  0000000000000000
 d6  0000000000000000  d7  0000000000000000
 d8  0000000000000000  d9  0000000000000000
 d10 0000000000000000  d11 0000000000000000
 d12 0000000000000000  d13 0000000000000000
 d14 0000000000000000  d15 0000000000000000
 d16 0000000000000000  d17 0000000000000000
 d18 0000000000000000  d19 0000000000000000
 d20 0000000000000000  d21 0000000000000000
 d22 0000000000000000  d23 0000000000000000
 d24 0000000000000000  d25 0000000000000000
 d26 0000000000000000  d27 0000000000000000
 d28 0100010001000100  d29 0100010001000100
 d30 0000000000000000  d31 3ff0000000000000
 scr 2800001b

         #00  pc 00017694  /system/lib/libc.so
         #01  pc 00007bb0  /system/lib/libcutils.so (mspace_merge_objects)
         #02  pc 0007b6c8  /system/lib/libdvm.so (_Z21dvmHeapSourceFreeListjPPv)
         #03  pc 00042ce0  /system/lib/libdvm.so
         #04  pc 00032f94  /system/lib/libdvm.so (_Z22dvmHeapBitmapSweepWalkPK10HeapBitmapS1_jjPFvjPPvS2_ES2_)
         #05  pc 00042c9c  /system/lib/libdvm.so (_Z27dvmHeapSweepUnmarkedObjectsbbPjS_)
         #06  pc 000337c0  /system/lib/libdvm.so (_Z25dvmCollectGarbageInternalPK6GcSpec)
         #07  pc 0005ff6c  /system/lib/libdvm.so (_Z17dvmCollectGarbagev)
         #08  pc 00072a8e  /system/lib/libdvm.so
         #09  pc 00030a8c  /system/lib/libdvm.so
         #10  pc 00034248  /system/lib/libdvm.so (_Z12dvmInterpretP6ThreadPK6MethodP6JValue)
         #11  pc 0006c692  /system/lib/libdvm.so (_Z14dvmCallMethodVP6ThreadPK6MethodP6ObjectbP6JValueSt9__va_list)
         #12  pc 0006c6b4  /system/lib/libdvm.so (_Z13dvmCallMethodP6ThreadPK6MethodP6ObjectP6JValuez)
         #13  pc 0005f7c0  /system/lib/libdvm.so
         #14  pc 00012c14  /system/lib/libc.so (__thread_entry)
         #15  pc 00012744  /system/lib/libc.so (pthread_create)

The failing function, mspace_merge_objects, appears to be implemented in dlmalloc.c in the Android source, but I really don't have any good idea what might be causing the crash. I also don't appear to be the only one hitting this, as there are similar-looking reports elsewhere on the web.

To this point, I've tried a variety of approaches to track down the problem. I've gone so far as to build my own version of CyanogenMod with the idea that I might be able to add more logging output. So far, I've not had any luck with this approach. I generally have no problem walking away from hobby projects when I lose interest. However, this has turned into a competition between me and the computer, and I'm not quite ready to give up.

Inertia
https://www.setera.org/2011/07/30/inertia/ (Sun, 31 Jul 2011)

Inertia is the resistance of any physical object to a change in its state of motion or rest, or the tendency of an object to resist any change in its motion.

For me, this also describes my tendencies toward side projects like my Pingus project. When I last worked on Pingus a couple of months ago, I updated the underlying AndEngine libraries and found a ton of breaking changes. I put Pingus on the shelf until I had more time to look at the breakage and how to solve it. The AndEngine changes are pretty significant and I’m going to need to rethink portions of Pingus in order to get things running correctly again.

Now my personal inertia is kicking in and causing me to put off this rework for a while. To me, this is the biggest difference between work and hobby projects… I don’t have to work on hobby projects if I don’t want to. Thus, Pingus is “on a break” for a while until I find the energy to bring it up to date relative to the underlying game engine.

In the meantime, I decided to spend a bit of time looking at Android’s App Widget support. Until I started digging into the documentation and examples, I had always assumed that a widget was handed a Canvas to draw directly onto the home screen. To me, this seemed like it would have been the easiest way for developers to build widgets.

It turns out that Android AppWidgets don’t work that way at all. AppWidgets are built on top of RemoteViews. According to the Android documentation, RemoteViews is

A class that describes a view hierarchy that can be displayed in another process. The hierarchy is inflated from a layout resource file, and this class provides some basic operations for modifying the content of the inflated hierarchy.

A RemoteViews object is created in one process and passed to the process that owns the Android home screen. It is actually a Parcelable object; however, due to class loading issues, only a small set of Views and Widgets is allowed to cross the process boundary. For anything that involves relatively complex graphical rendering, the only real way to drive the widget’s contents is to specify a very simple widget layout:

	<?xml version="1.0" encoding="utf-8"?>
	<!-- res/layout/main.xml: the original markup was lost in extraction; this is
	     a minimal reconstruction matching the R.layout.main and R.id.imageview
	     references in the code below -->
	<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
	    android:layout_width="match_parent"
	    android:layout_height="match_parent">

	    <ImageView
	        android:id="@+id/imageview"
	        android:layout_width="match_parent"
	        android:layout_height="match_parent" />

	</FrameLayout>

and then sending bitmaps to the image view:

	private void updateWidget(Context context, AppWidgetManager appWidgetManager) {
		// Render the widget content into an offscreen bitmap
		// ("drawable" is a field holding whatever should be drawn)
		Bitmap bitmap = Bitmap.createBitmap(100, 100, Config.ARGB_8888);
		Canvas canvas = new Canvas(bitmap);
		this.drawable.draw(canvas);

		// Push the bitmap into the RemoteViews hierarchy
		RemoteViews views = new RemoteViews(context.getPackageName(), R.layout.main);
		views.setImageViewBitmap(R.id.imageview, bitmap);

		// Ask the AppWidgetManager to update all instances of this widget
		ComponentName componentName = new ComponentName(context, MyAppWidgetProvider.class);
		appWidgetManager.updateAppWidget(componentName, views);
	}
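
For context, here is a minimal sketch of how this method might be driven from the provider’s onUpdate callback. The framework pieces (AppWidgetProvider and onUpdate) are real; the assumption is simply that updateWidget above lives on the provider class itself:

	public class MyAppWidgetProvider extends AppWidgetProvider {
		@Override
		public void onUpdate(Context context, AppWidgetManager appWidgetManager, int[] appWidgetIds) {
			// The system invokes this on widget placement and on each update interval
			updateWidget(context, appWidgetManager);
		}
	}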

While this seems like a high-overhead way to handle updates to the widget contents, I have to assume that the Android developers had a good reason for doing things this way. I can only hope that there are some tricks in the Android implementation that lower the cost of this operation. Given that I’ve just managed to get this to work at all, I imagine there is a lot of room for improvement in my use of this API. However, it was confusing enough to figure out that I thought others might benefit from my pain.

Supporting Extra Large Screens in Android
https://www.setera.org/2011/05/11/supporting-extra-large-screens-in-android/ (Thu, 12 May 2011)

In my last Android Pingus post I mentioned that I was interested in getting Pingus running full screen on my Motorola Xoom. It was clear from applications in the Android Market that it was possible to support a wide range of Android versions while still running full screen on extra large screens, but it was not entirely obvious to me how to actually accomplish that.

In reading the Android supports-screens documentation, it is clear that it is necessary to set the xlargeScreens attribute to true. However, the xlargeScreens attribute is not supported below API level 9. Trying to shoehorn that attribute into my project, which was attempting to support back to API level 5, resulted in the following error.

XLargeScreens Attribute Failure

With a bit of finagling, I was able to get things working. In order to allow the xlargeScreens attribute, it is necessary to specify a target SDK version of at least 9.

XLargeScreens Working

This screenshot shows how the minimum SDK version can be set below version 9 and the target version is set to 9, allowing the xlargeScreens attribute to be specified. In addition, it is necessary to change the Android version level in the project properties.

XLargeScreens Properties
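
Expressed in the manifest rather than screenshots, the combination looks roughly like this. This is a sketch: the xlargeScreens attribute and the SDK levels come from the discussion above, while the package name and the other screen-size attributes are assumptions.

	<manifest xmlns:android="http://schemas.android.com/apk/res/android"
	    package="com.example.pingus">

	    <!-- Target API 9 so that xlargeScreens is understood,
	         while still installing back to API level 5 -->
	    <uses-sdk android:minSdkVersion="5" android:targetSdkVersion="9" />

	    <supports-screens
	        android:smallScreens="true"
	        android:normalScreens="true"
	        android:largeScreens="true"
	        android:xlargeScreens="true" />

	</manifest>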

With the project properties set to use API level 9, there does not appear to be any automated way to restrict usage to only the APIs available in versions older than 9. Because of this, I do worry about choosing Android APIs that are not available at the minimum SDK level and will fail on-device. My plan at this point is to switch back to building primarily for the low end and switch once in a while to try things on my Xoom. If I were a bit more serious, I would probably handle this automatically as part of a build script.

I do wish that Google had handled things differently in regard to how this works.

Pingus on Android – “Destroyable Terrain” #2
https://www.setera.org/2011/03/27/pingus-on-android-%e2%80%93-%e2%80%9cdestroyable-terrain%e2%80%9d-2/ (Sun, 27 Mar 2011)

When we last met, I had begun working on the ability for the Pingus character to destroy the terrain. At that point, I had managed to update the images for the sprites making up the terrain, but since those images were shared, every sprite using a given image was affected.

I added support for separating the sprite images when a sprite needed to be altered, but it took me a while to realize I was forgetting to set the correct position for the newly created sprite, which led to this confusing result.

Once I realized that the issue was due to incorrect image/sprite generation, I had a much better result.

It became clear very quickly that my current approach for ground sprites was not going to work very well. Using the same images scaled and rotated in various ways makes it very difficult to find the correct sprite and, as you can see from the red X’s, it also means that many sprite images may need to be altered in the course of digging out a particular chunk of terrain.

Ground Tiles

To improve this so that the sprites are better aligned and easier to deal with, I’m switching to pre-generated ground tiles. The tooling that currently generates the collision map has been extended to generate a set of square ground tiles. Starting with an image that contains all of the ground sprites:

Full Ground Image

This image can be cropped down to include only the non-transparent area:

Cropped Ground Image

Finally, it is broken down into individual tiles:

Ground Tiles

Each non-transparent tile is stored individually, and a new ground tile map object tracks the images and the transparent tiles. At the moment, the tiles are generated as 128×128 pixel images, which plays well with the OpenGL requirement that texture dimensions be powers of 2. Depending on the maximum texture size, multiple ground tiles may be laid out within a single texture with minimal wasted space. The trick will be to pick a size that balances the various costs involved in loading and manipulating the sprite textures when destruction occurs.
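
As a rough sketch of the packing arithmetic involved (the method names and the 1024 example are mine, not the project’s):

	// How many 128x128 tiles fit along one edge of a square power-of-two texture?
	static int tilesPerRow(int maxTextureSize, int tileSize) {
		return maxTextureSize / tileSize; // e.g. 1024 / 128 = 8, so 64 tiles per texture
	}

	// Pixel offsets of the n-th tile within its texture atlas
	static int tileOffsetX(int index, int maxTextureSize, int tileSize) {
		return (index % tilesPerRow(maxTextureSize, tileSize)) * tileSize;
	}

	static int tileOffsetY(int index, int maxTextureSize, int tileSize) {
		return (index / tilesPerRow(maxTextureSize, tileSize)) * tileSize;
	}

Because 128 divides the common power-of-two texture sizes exactly, no space is wasted along the edges; the real trade-off is in how much texture data must be reloaded when a dig touches a tile.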

I had hoped to show this work via video in this post, but I’ve run up against a bit of a roadblock. While fixing one problem, I introduced another issue that I can’t seem to resolve. At this point, it is better for me to walk away from this project for a few days and come back with a fresh set of eyes. With any luck, my next entry will show a final working destroyable terrain implementation.

Pingus on Android – “Destroyable Terrain” #1
https://www.setera.org/2011/03/05/pingus-on-android-destroyable-terrain-1/ (Sun, 06 Mar 2011)

They say that slow and steady wins the race. In the case of this project, the only thing I have going for me is the slow part. Nicolas Gramlich, author of the AndEngine library on which this is based, referred to this part of the project as “destroyable terrain”. I really like that phrase, so I think I will continue to use it here.

In Early Digger Support I covered the initial digger support. At that point I had managed to update the in-memory collision map, but updating the actual textures driving AndEngine was proving to be a bit more difficult. I’m still not there, but I think I’m moving in a positive direction. The following video shows the current state of things. The textures are being updated with a full red fill to make it clear that they have been hit.

So, why is everything turning red? Well, that turns out to be the next item that will need to be dealt with… shared textures. To save memory, many of the sprites share common textures and texture regions. Thus, in the current implementation, changing the underlying texture information affects all sprites that share that information. This is something I knew would have to be dealt with eventually, so it appears that eventually is now.

Quad Trees

When I initially started playing with altering the texture data, I was worried about performance. My first attempt at locating the sprites to be altered used the standard AndEngine functionality for querying collisions, the “collidesWith” method on shapes. This proved to be really expensive for gross-level collision detection. Performance tests using the built-in Android tools for capturing trace data showed that much of the cost of the terrain destruction was attributable to simply finding the sprite to be altered.

I had heard previously about the use of Octrees in 3D to speed up searches on the boundaries of objects. In the 2D world, Quadtrees are used instead. I was surprised not to find a ready-made Quadtree implementation on the web, but was able to piece together a nice generic implementation based on lots of research. With the Quadtree, I was able to get closer to reasonable performance, as you can see in the video capture. My hope is that using a Quadtree and doing the necessary cloning to split sprite textures will lead to a reasonably performant destroyable terrain implementation, but that remains to be seen.
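
Since I had trouble finding one to link to, here is a minimal fixed-depth sketch of the idea; this is an illustration rather than the implementation used in the project. Each node covers a rectangle, and a query only descends into children whose bounds intersect the search area. IBoundedObject is the small interface shown in the aside below.

	import android.graphics.Rect;
	import java.util.ArrayList;
	import java.util.List;

	public class Quadtree<T extends IBoundedObject> {
		private final Rect bounds;
		private final List<T> objects = new ArrayList<T>();
		private final List<Quadtree<T>> children = new ArrayList<Quadtree<T>>();

		// Subdivide eagerly to a fixed depth; each level quarters the area
		public Quadtree(Rect bounds, int depth) {
			this.bounds = bounds;
			if (depth > 0) {
				int cx = bounds.centerX();
				int cy = bounds.centerY();
				children.add(new Quadtree<T>(new Rect(bounds.left, bounds.top, cx, cy), depth - 1));
				children.add(new Quadtree<T>(new Rect(cx, bounds.top, bounds.right, cy), depth - 1));
				children.add(new Quadtree<T>(new Rect(bounds.left, cy, cx, bounds.bottom), depth - 1));
				children.add(new Quadtree<T>(new Rect(cx, cy, bounds.right, bounds.bottom), depth - 1));
			}
		}

		// Store each object in the smallest node that fully contains its bounds
		public void insert(T object) {
			for (Quadtree<T> child : children) {
				if (child.bounds.contains(object.getBounds())) {
					child.insert(object);
					return;
				}
			}
			objects.add(object);
		}

		// Collect every object whose bounds intersect the query area,
		// skipping whole subtrees that cannot contain a match
		public void query(Rect area, List<T> results) {
			if (!Rect.intersects(bounds, area)) {
				return;
			}
			for (T object : objects) {
				if (Rect.intersects(object.getBounds(), area)) {
					results.add(object);
				}
			}
			for (Quadtree<T> child : children) {
				child.query(area, results);
			}
		}
	}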

Java Generics Aside

My Quadtree implementation was initially built to accept a single object type. It seemed better to use Java generics to make the Quadtree more generally useful. I was hung up by one thing, though: I wanted the Quadtree to accept only objects implementing a certain interface. Basically, I wanted this:

public class Quadtree<T implements IBoundedObject> {

Where IBoundedObject is simply defined as:

public interface IBoundedObject {
	Rect getBounds();
}

However, the implements keyword is not supported by the generics syntax. This had me confused for a while until I realized that it is possible to do what I wanted, but I needed to specify extends:

public class Quadtree<T extends IBoundedObject> {

I’m sure there is a perfectly good technical reason for doing things this way (generics bounds always use extends to denote an upper bound, whether that bound is a class or an interface), but personally I find the inconsistency confusing and unnecessary.
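
With the bound in place, a caller of the sketch above might look like the following, where SpriteBounds is a hypothetical adapter implementing IBoundedObject for a sprite, and worldWidth, worldHeight, groundSprite and digArea are placeholders:

	Quadtree<SpriteBounds> tree = new Quadtree<SpriteBounds>(new Rect(0, 0, worldWidth, worldHeight), 5);
	tree.insert(new SpriteBounds(groundSprite));

	// Only sprites whose bounds intersect the dug-out area come back
	List<SpriteBounds> hits = new ArrayList<SpriteBounds>();
	tree.query(digArea, hits);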

Next Time

With any luck, I will be able to show a reasonably performant implementation of destroyable terrain by pulling the various pieces together.

Pingus on Android – Early Digger Support
https://www.setera.org/2011/02/19/pingus-on-android-early-digger-support/ (Sat, 19 Feb 2011)

Work and life have conspired to keep me from making a lot of progress on my Pingus on Android project. I had hoped to get further before posting here again, but instead decided to go ahead and post a minor update. In my last post I covered my early collision detection implementation.

The next step was to start implementing some behaviors for the Pingus. The digger behavior seemed a good place to start. In order to implement the digger, it is necessary to actually alter the collision map generated by the tool. In the end, this part was pretty easy to handle, as sketched below. The results are captured in the video capture.
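
In terms of the CollisionMap wrapper from my collision detection post (its API appears later in this feed), a single dig step amounts to something like the following sketch; the method name and dig dimensions are made up:

	// Clear a block of ground below the digger, leaving solid rock untouched
	void digStep(CollisionMap map, int x, int y, int digWidth, int digHeight) {
		for (int dy = 0; dy < digHeight; dy++) {
			for (int dx = 0; dx < digWidth; dx++) {
				if (map.getCollisionValue(x + dx, y + dy) == CollisionMap.PIXEL_GROUND) {
					map.setCollisionValue(x + dx, y + dy, CollisionMap.PIXEL_TRANSPARENT);
				}
			}
		}
	}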

While it was relatively easy to carve out a path through the in-memory collision map, updating the actual graphics is proving to be much more difficult. AndEngine implements 2D graphics using 3D/OpenGL, which implies that in order to update the graphics, the underlying texture images need to be updated. I’m in the process of building AndEngine support for altering the underlying texture images. At the moment, this appears to be slow and may need to be abandoned. Just as when I worked on the collision map, the lack of guaranteed clipping and Z-buffer support on Android devices further complicates the situation.

While there are times I wonder whether using AndEngine makes this project more difficult, I’m not quite ready to give it up. More to come…

Pingus on Android – More Collision Detection
https://www.setera.org/2011/01/16/pingus-on-android-more-collision-detection/ (Sun, 16 Jan 2011)

It is a good thing that I’m not trying to make my living with this little project, given the slow forward progress. However, there is continued progress on the collision detection compared to my last update, Pingus On Android – Early Collision Detection.

As you can see from this video, things are still a bit twitchy, but at least Pingus is able to step up and down. To get to this point, I considered a few options for improving the collision detection in the system.

  • Use the OpenGL stencil buffer
    While this might be an interesting approach, the stencil buffer is not guaranteed to be available on all devices.
  • glReadPixels at collision point
    Using glReadPixels “around” the point of a potential collision might be possible, but appeared to be a fairly expensive operation. In addition, it would be difficult to determine whether a pixel was “ground” or “solid”.
  • glReadPixels to build a full collision map during startup
    This approach would be an improvement over continually using glReadPixels, by caching the results, but suffers from many of the same problems. In addition, the cached image would be large if 4-byte pixels were used.

Pre-generated Collision Map

In the end, I decided that the best approach was to use an external tool to generate a collision map. Unlike many other games, the Pingus world is fairly static, allowing the pre-generation of the map. It is clear that some future aspects of the gameplay will require more dynamic collision detection, but pre-generating this much of the collision map offered a lot of positives:

  • Allowed the hard work of calculating the collision map to be moved outside of the constrained mobile environment.
  • Allowed the collision map output and associated object wrapper to be tested outside of the constrained mobile environment.
  • Allowed the collision map data to be heavily processed to provide the smallest usable map.

Initial Map Image

The first step in generating the collision map is to create an image representing the world objects. This initial image is generated using the Java image APIs in the RGB colorspace, using the full-color sprite images. Between the bit depth and the excess transparent space, this image is much larger than needed for the collision map. The following image (scaled down) demonstrates the wasted space.

Indexed Image

In an attempt to reduce the size of the individual pixels, the image was converted to an indexed color model. However, the Java image APIs will always attempt to match the closest color, yielding a collision map that looks like the following. Just not quite what we need.

Indexed Image Corrected

Instead of drawing the sprite images directly into the indexed collision map, it is necessary to first convert the sprite images, marking opaque versus transparent pixels, before drawing the collision map. In the following image, the colors have been mapped to mean:

  • cyan – Transparent
  • blue – Liquid
  • green – Ground
  • red – Solid (not shown)

The original PNG that this scaled version originated from is 1400×700 pixels and is 3.5K on disk (with compression, etc). Uncompressed, it is a fairly large image to deal with in memory.

It turns out that dealing with alpha values in the Java image library is somewhat tricky. The way alpha is dealt with depends on the underlying color model that is being used. To avoid having to always check during the conversion of the RGB sprite images into the opaque/transparent image, the following class helped.

	class ColorMapTransparencyHelper {
		private ColorModel colorModel;
		private boolean hasTransparentPixel;
		private int transparentPixelRGB;
		
		ColorMapTransparencyHelper(ColorModel colorModel) {
			this.colorModel = colorModel;
			
			// Indexed color models mark transparency via a dedicated palette
			// entry rather than a per-pixel alpha component; cache that
			// entry's RGB value up front.
			if (colorModel instanceof IndexColorModel) {
				IndexColorModel indexColorModel = (IndexColorModel) colorModel;
				
				int transparentPixel = indexColorModel.getTransparentPixel();
				if (transparentPixel != -1) {
					hasTransparentPixel = true;
					transparentPixelRGB = indexColorModel.getRGB(transparentPixel);
				}
			}
		}
		
		boolean hasAlpha(int rgb) {
			// Indexed models: compare against the transparent palette entry;
			// other models: consult the alpha component directly.
			return hasTransparentPixel ? 
				(rgb == transparentPixelRGB) : 
				(colorModel.getAlpha(rgb) != 0);
		}
	}

Cropped and Corrected Indexed Image

The final step was to eliminate as much transparency as possible. The following cropped image is the final result. The PNG image in this case is 1400×440 pixels and compresses to 3.1K.

Moving Beyond the Image

Originally, I had thought I would use a packaged PNG image as the basis for the collision map on the device. While this might have worked, the biggest problem is that the Android graphics API does not make it easy to get the palette index of a pixel rather than its RGB value. The multiple conversions required to treat the image as a collision map were more heavyweight than seemed worthwhile. Thus, the final step the tool takes is to convert the PNG image into an array of bytes representing the states of the pixels. These bytes are written to the package and read by the device as the collision map.
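
The conversion itself is straightforward with the desktop Java image APIs. Here is a sketch of the idea; it is not the tool’s exact code, and the width/height header is my own convention:

	import java.awt.image.BufferedImage;
	import java.io.DataOutputStream;
	import java.io.File;
	import java.io.FileOutputStream;
	import java.io.IOException;
	import javax.imageio.ImageIO;

	// Dump each pixel's palette index (0-3 in our case) as a single byte
	static void writeCollisionBytes(File png, File out) throws IOException {
		BufferedImage image = ImageIO.read(png);
		DataOutputStream output = new DataOutputStream(new FileOutputStream(out));
		try {
			output.writeInt(image.getWidth());
			output.writeInt(image.getHeight());
			for (int y = 0; y < image.getHeight(); y++) {
				for (int x = 0; x < image.getWidth(); x++) {
					// For an indexed image, band 0 of the raster is the palette
					// index, avoiding the RGB round-trip that Android makes hard
					output.writeByte(image.getRaster().getSample(x, y, 0));
				}
			}
		} finally {
			output.close();
		}
	}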

The model class that wraps this data is aware of the transparent regions that were cropped out of the collision map and handles them automatically. This yields a very simple API for callers:

public class CollisionMap {
	public static final byte PIXEL_TRANSPARENT = 0;
	public static final byte PIXEL_SOLID = 1;
	public static final byte PIXEL_GROUND = 2;
	public static final byte PIXEL_WATER = 3;

	// (method bodies elided)

	/**
	 * Get the collision value at the specified location.
	 * 
	 * @param x the x coordinate
	 * @param y the y coordinate
	 * @return one of the PIXEL_* values
	 */
	public byte getCollisionValue(int x, int y);

	/**
	 * Get an array of bytes representing the map values for the specified row
	 * segment.
	 * 
	 * @param x_start the starting x coordinate
	 * @param y the row's y coordinate
	 * @param width the number of values to return
	 * @return the collision values for the row segment
	 */
	public byte[] getHorizontalCollisionEdge(int x_start, int y, int width);

	/**
	 * Get an array of bytes representing the map values for the specified
	 * column segment.
	 * 
	 * @param x the column's x coordinate
	 * @param y_start the starting y coordinate
	 * @param height the number of values to return
	 * @return the collision values for the column segment
	 */
	public byte[] getVerticalCollisionEdge(int x, int y_start, int height);

	/**
	 * Set the specified collision map value.
	 * 
	 * @param x the x coordinate
	 * @param y the y coordinate
	 * @param value one of the PIXEL_* values
	 */
	public void setCollisionValue(int x, int y, byte value);
}

With the collision map in place, it is possible to query for information about the world around the Pingus. This information will be further useful for implementing things like the visual world map and determining whether a Pingus can dig at a particular location.
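
For example, a dig check against this API might look like the following sketch; the actual gameplay rule may well differ:

	// A Pingus can dig here only if everything just below its feet is ground
	boolean canDig(CollisionMap map, int footX, int footY, int width) {
		byte[] below = map.getHorizontalCollisionEdge(footX, footY + 1, width);
		for (byte value : below) {
			if (value != CollisionMap.PIXEL_GROUND) {
				return false;
			}
		}
		return true;
	}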

To be continued…

Pingus on Android
https://www.setera.org/2010/11/21/pingus-on-android/ (Mon, 22 Nov 2010)

As I’ve continued hiring at mFoundry (if you live in the Bay Area, check us out), I’ve been very busy not coding. As usual, that implies the need for a non-work programming project. As I mentioned in my last post, I’ve started digging into Android programming. I decided it would be interesting to try to build a game of some sort. Given that I have zero skill with graphics, I had to cheat a bit. I’m attempting to build an Android version of the Pingus game using the graphics and levels from their source code and the very cool Android game engine AndEngine.

The most straightforward approach to bringing Pingus to Android would probably be a native C port using the Android NDK; however, I’m more interested in trying to build out the game logic. I don’t foresee myself being nearly as good about documenting the process as Shamus Young at Twenty Sided, but to prove I’m still alive and coding, it seemed a good time to post something.

AndEngine

AndEngine is an excellent open-source library for building 2D games using Android’s OpenGL ES support for improved function and performance. AndEngine has a great set of examples that can be installed directly from the Android Market, and they do a decent job of showing how to use the library. I’m slowly finding my way around it; however, real documentation would be very helpful in truly understanding the library. With that said, I can’t complain too much about an excellent library that is completely free.

Parsing Resource Definitions

I had originally planned to package the Pingus level and resource definitions into the application package and read them at runtime. The Pingus level and resource definition files are defined using a subset of Lisp S-expressions. While running under the emulator, it became clear that reading these files at runtime was going to be too expensive. After a couple of iterations, the resource definitions are now read by a separate tool into a set of representative model objects. Those objects are then serialized into a SQLite database packaged into the Android package. Even after moving to this model, it became necessary to take control of the serialization and deserialization logic to improve performance.

I should note that performance was fine without all of these tweaks on my Captivate; however, I felt that the performance work would be of benefit no matter what device was used.

Base Graphics

After getting the model object loading straightened out, I moved on to building the basic level graphics using AndEngine Textures, Texture Regions and Sprites. I did not expect this to be incredibly difficult; however, I’m finding otherwise. Pingus reuses a number of images, with modifiers like rotate 90, rotate 90 flip and rotate 180.

Flipping Images

In digging around the internet, all examples of a horizontal “flip” operation suggest something like the following:

sprite.setScale(-1, 1); // Horizontal flip

I tried various combinations with a negative scale factor, all of which resulted in the sprite disappearing. Finally, I stumbled onto the answer in the AndEngine forums: flip the texture region rather than the sprite.

sprite.getTextureRegion().setFlippedHorizontal(true);

90 Degree Rotation

90 degree (and presumably 270 degree) rotations are proving difficult to get right. I’ve tried a couple of options. If I rotate 90 degrees with the rotation centered at (0,0), I end up with something offset primarily in the negative X direction.

While I can compensate in this case using a hardcoded offset:

setPosition(position.x + 50, position.y);

I have no idea why this value works or how it may be tied to any of the image bounds. I’ve also tried rotating around the center of the image, with similarly bizarre results; the offsets needed to get things in place were just as questionable:

setPosition(position.x - 125, position.y + 125);

Until I can find the correct calculation that properly places the 90 degree rotations, I’m kind of stuck. Even with the hardcoded offsets, I know I’m not quite in the right spot, although it appears to be pretty close:

In order to get a better idea of where the actual problem images are located, I hacked up the troublesome image a bit, adding an ugly white border and a black spot in the upper-left corner. With this in place, it is at least clear where this image is located relative to all of the other images:

With the outline, it is possible to pick out the specific image, but it does not give any further insights into the calculations to get those images in place.

What’s Next?

After spending a considerable amount of time trying to figure out the rotation offsets, it is probably a good time to step back for a bit and look elsewhere. Hopefully, coming back to this problem after some down time, an explanation will reveal itself. In the meantime, adding the ability to zoom (multi-touch!) and pan within the level seems like an interesting next project that will give me a chance to dig further into the AndEngine support. In addition, it may also be useful in helping determine the correct location for the rotated items.

More iPhone Versus Java Differences
https://www.setera.org/2010/08/15/more-iphone-versus-java-differences/ (Sun, 15 Aug 2010)

In my previous entries, I’ve discussed a few things that caught me off guard while learning iPhone development. In the last couple of weeks, I’ve picked up an Android device to dig into that platform a bit and probably will spend less time playing with iPhone development. Before I move too far away from iPhone, I wanted to wrap up the remaining differences I found interesting between the iPhone and Java platforms.

Run Loop Required for Networking

One of the earliest things I needed to do was build out the networking code for my DAAP player. Initially, I was building this code as a standard Macintosh command-line application. I happily wrote code to set up a synchronous networking call using NSURL and NSURLConnection and then… nothing. Unlike Java, it was necessary to have a “run loop” executing. Had I done this test on the iPhone emulator, I would never have run across this, since an iPhone application already has a run loop executing.

Subclassing and Class Clusters

In Java, it is not possible to add functionality to an existing class. The only real option is to subclass the class of interest. In general, that works OK until you hit a class like java.lang.String that isn’t meant to be subclassed, in which case you need to provide some kind of wrapper or utility class. My first attempt at adding some new functionality to NSMutableDictionary from the Foundation library was via a subclass. I was greeted at runtime by an error similar to:

2010-08-14 09:55:48.965 TestProject[1136:207] *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** -[NSDictionary objectForKey:]: method only defined for abstract class.  Define -[MyDictionarySubclass objectForKey:]!'

What the heck? It turns out that most of the collection classes on the iPhone are implemented as class clusters. According to the Cocoa Fundamentals Guide, class clusters

… group a number of private, concrete subclasses under a public, abstract superclass. The grouping of classes in this way simplifies the publicly visible architecture of an object-oriented framework without reducing its functional richness.

Had I really needed to subclass NSMutableDictionary, Matt Gallagher describes how to create such a subclass. In my case, it turns out what I really needed was an Objective-C category to add methods to NSMutableDictionary directly rather than subclassing it. Categories remind me of similar functionality available in Smalltalk, allowing additional methods to be attached to existing classes. The class “shape” (instance variables) cannot be changed using a category, but new methods can be added, which is very helpful for creating utility methods on a specific class rather than having to put them on a separate class. Looking around the documentation for the various frameworks in the system, it is amazing to see how many classes are extended using categories.
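
For contrast, the Java-side workaround mentioned earlier ends up as a separate utility class rather than new methods on the original type (a hypothetical example):

	// Java cannot attach methods to String, so helpers live in an unrelated class
	public final class StringUtils {
		private StringUtils() {
		}

		public static String reversed(String s) {
			return new StringBuilder(s).reverse().toString();
		}
	}

The call site then reads utility-first, StringUtils.reversed(name), where an Objective-C category would allow the more natural [name reversed].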

Summary

Although my DAAP player is nowhere near complete, the project did offer me plenty of visibility into Objective-C and iPhone development. Objective-C and, in particular, the various iPhone frameworks are incredibly powerful. While there were a few growing pains along the way, the transition to iPhone development was relatively straightforward and enjoyable.

iPhone Versus Exceptions
https://www.setera.org/2010/04/25/iphone-versus-exceptions/ (Mon, 26 Apr 2010)

I’m continuing to make slow forward progress with my DAAP-based music player for the iPhone. My most recent changes have taken it much closer to the standard music player functionality on the iPhone. In particular, I’ve switched to using a tab view controller for the major perspectives on the music database.

Tab Based Main View

In addition, there is now a (very) rudimentary Now Playing screen to control playback.

Now Playing View


iPhone Versus Exceptions

In my last entry, I mentioned that I’ve struggled through some interesting differences in iPhone development compared to my years of experience in Java. As a long-time Java developer, I’m very accustomed to the use of checked exceptions. Most, if not all, error handling in Java is done through the creation, throwing and catching of exceptions. I’m accustomed to catching and handling exceptions from the underlying libraries, as well as creating and throwing my own. It was with that background that I approached iPhone development and quickly found out that exceptions are not the recommended way of handling error conditions. While the standard try/catch functionality is supported in Objective-C, the documentation for Cocoa development makes it clear that using exceptions should be avoided:

Important: You should reserve the use of exceptions for programming or unexpected runtime errors such as out-of-bounds collection access, attempts to mutate immutable objects, sending an invalid message, and losing the connection to the window server. You usually take care of these sorts of errors with exceptions when an application is being created rather than at runtime.

Instead of exceptions, error objects (NSError) and the Cocoa error-delivery mechanism are the recommended way to communicate expected errors in Cocoa applications.

This is an important difference to understand when transitioning from Java development to Cocoa development. It matters when making calls to library methods and functions, but it must also be considered when defining your own calling conventions and libraries. In order to remain consistent, it is important to follow the NSError pattern throughout.
