Package javax.microedition.lcdui
The UI API provides a set of features for implementation of user
interfaces for MIDP applications.
User Interface
The main criteria for the MIDP have been drafted with mobile
information devices (MIDs) in mind (e.g., mobile phones and pagers).
These devices differ from desktop systems in many ways, especially
how the user interacts with them. The following UI-related
requirements are important when designing the user interface
API:
-
The devices and applications should be useful to users who are
not necessarily experts in using computers.
-
The devices and applications should be useful in situations
where the user cannot pay full attention to the application.
For example, many phone-type devices will be operated with one
hand.
-
The form factors and UI concepts of the device differ between
devices, especially from desktop systems. For example, the
display sizes are smaller, and the input devices do not always
include pointing devices.
-
Applications that run on MIDs should have UIs that are consistent
with those of native applications so that users find them easy to
use.
Given the capabilities of devices that will implement the MIDP and
the above requirements, the MIDP Expert Group (MIDPEG) decided not
to simply subset the existing Java UI toolkit, the Abstract Window
Toolkit (AWT). Reasons for this decision include:
-
Although AWT was designed for desktop computers and optimized for
them, it also carries assumptions rooted in that heritage.
-
When a user interacts with AWT, event objects are created
dynamically. These objects are short-lived and exist only until
each associated event is processed by the system. At this point,
the event object becomes garbage and must be reclaimed by the
system's garbage collector. The limited CPU and memory
subsystems of a MID typically cannot handle this behavior.
-
AWT has a rich but desktop-based feature set. This feature set
includes support for features not found on MIDs. For example,
AWT has extensive support for window management (e.g.,
overlapping windows, window resize, etc.). MIDs have small
displays which are not large enough to display multiple
overlapping windows. The limited display size also makes
resizing a window impractical. As such, the windowing and layout
manager support within AWT is not required for MIDs.
-
AWT assumes certain user interaction models. The component set
of AWT was designed to work with a pointer device (e.g., a mouse
or pen input). As mentioned earlier, this assumption is valid
for only a small subset of MIDs since many of these devices have
only a keypad for user input.
Structure of the MIDP UI API
The MIDP UI is logically composed of two APIs: the high-level and the
low-level.
The high-level API is designed for business applications whose client
parts run on MIDs. For these applications, portability across devices
is important. To achieve this portability, the high-level API employs a
high level of abstraction and provides very little control over look
and feel. This abstraction is further manifested in the following
ways:
-
The actual drawing to the MID's display is performed by the
implementation. Applications do not define the visual appearance
(e.g., shape, color, font, etc.) of the components.
-
Navigation, scrolling, and other primitive interactions are
encapsulated by the implementation, and the application is not
aware of these interactions.
-
Applications cannot access concrete input devices like specific
individual keys.
In other words, when using the high-level API, it is assumed that
the underlying implementation will do the necessary adaptation to
the device's hardware and native UI style. The classes that
provide the high-level API are the subclasses of
{@link javax.microedition.lcdui.Screen}.
The low-level API, on the other hand, provides very little
abstraction. This API is designed for applications that need
precise placement and control of graphic elements, as well as
access to low-level input events. Some applications also need to
access special, device-specific features. A typical example of
such an application would be a game.
Using the low-level API, an application can:
-
Have full control of what is drawn on the display.
-
Listen for primitive events like key presses and releases.
-
Access concrete keys and other input devices.
The classes that provide the low-level API are
{@link javax.microedition.lcdui.Canvas} and
{@link javax.microedition.lcdui.Graphics}.
Applications that program to the low-level API are not guaranteed
to be portable, since the low-level API provides the means to
access details that are specific to a particular device. If the
application does not use these features, it will be portable. It
is recommended that applications use only the platform-independent
part of the low-level API whenever possible. This means that the
applications should not directly assume the existence of any keys
other than those defined in the Canvas class, and
they should not depend on a specific screen size. Rather, the
abstract game-action mapping mechanism should be used instead of
concrete keys, and the application should inquire about the size of
the display and adjust itself accordingly.
Class Hierarchy
The central abstraction of the MIDP's UI is a
Displayable object, which encapsulates
device-specific graphics rendering with user input. Only one
Displayable may be visible at a time, and the
user can see and interact with only the contents of that
Displayable.
The Screen class is a subclass of
Displayable that takes care of all user interaction
with high-level user interface components. The Screen
subclasses handle rendering, interaction, traversal, and
scrolling, with only higher-level events being passed on to the
application.
The rationale behind this design is based on the different display
and input solutions found in MIDP devices. These differences imply
that the component layout, scrolling, and focus traversal will be
implemented differently on different devices. If an application
were required to be aware of these issues, portability would be
compromised. Simple screenfuls also organize the user interface
into manageable pieces, resulting in user interfaces that are easy
to use and learn.
There are three categories of Displayable objects:
-
Screens that encapsulate a complex user interface component
(e.g., classes List or TextBox).
The structure of these screens is predefined, and the application
cannot add other components to these screens.
-
Generic screens (instances of the
Form class) that
can contain Item objects to represent user
interface components. The application can populate
Form objects with an arbitrary number of text,
image, and other components; however, it is recommended that
Form objects be kept simple and that they should be
used to contain only a few, closely-related user interface
components.
-
Screens that are used in the context of the low-level API
(i.e., subclasses of the class Canvas).
Each Displayable can have
a title, a Ticker, and a set of
Commands attached to it.
The class Display acts as the display manager that is
instantiated for each active MIDlet and provides
methods to retrieve information about the device's display
capabilities. A Displayable is made visible by
calling the setCurrent() method of
Display. When a Displayable is made
current, it replaces the previous Displayable.
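As a brief illustration of this model, the following sketch assumes that
a Display reference named display has already been
obtained with Display.getDisplay(midlet); the screen titles
and contents here are purely illustrative:
// 'display' is assumed to have been obtained earlier with
// Display.getDisplay(midlet); the screen names are illustrative.
List menu = new List("Menu", Choice.IMPLICIT);   // a Screen with a predefined structure
menu.append("Play", null);
menu.append("Help", null);
display.setCurrent(menu);                        // menu becomes the current Displayable

Alert saved = new Alert("Saved", "Game saved.", null, AlertType.INFO);
display.setCurrent(saved, menu);                 // show the alert, then return to menu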
Class Overview
It is anticipated that most applications will utilize screens with
predefined structures like List, TextBox,
and Alert. These classes are used in the following ways:
-
List is used when the user should select from a predefined set of
choices.
-
TextBox is used when asking for textual input.
-
Alert is used to display temporary messages containing text and images.
A special class Form is defined for cases where
screens with a predefined structure are not sufficient. For
example, an application may have two TextFields, or a
TextField and a simple ChoiceGroup.
Although this class (Form) allows creation of
arbitrary combinations of components, developers should keep the
limited display size in mind and create only simple
Forms.
Form is designed to contain a small number of closely
related UI elements. These elements are the subclasses of
Item: ImageItem,
StringItem, TextField,
ChoiceGroup, Gauge, and
CustomItem. The classes ImageItem and
StringItem are convenience classes that make certain
operations with Form and Alert
easier. By subclassing CustomItem, application
developers can introduce Items with a new visual
representation and interactive elements. If the components do not
all fit on the screen, the implementation may either make the form
scrollable or implement some components so that they pop up in a
new screen or expand when the user edits the element.
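As a sketch of such a simple Form (the labels, sizes, and
values below are illustrative, not part of the API), a small group of
closely related Items might be assembled as follows:
// A sketch of a small, closely related group of Items on a Form.
Form buildSettingsForm() {
    Form form = new Form("Settings");
    form.append(new TextField("Player name", "", 32, TextField.ANY));
    ChoiceGroup sound = new ChoiceGroup("Sound", Choice.EXCLUSIVE);
    sound.append("On", null);
    sound.append("Off", null);
    form.append(sound);
    form.append(new Gauge("Volume", true, 10, 5));      // interactive gauge
    form.append(new StringItem("Version", "1.0"));      // read-only text
    return form;
}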
Interplay with Application Manager
The user interface, like any other resource in the API, is to be
controlled according to the principle of MIDP application
management. The UI expects the following conditions from the
application management software:
-
getDisplay() is callable starting from the
MIDlet's constructor until
destroyApp() has returned.
-
The Display object is the same until
destroyApp() is called.
-
The Displayable object set by
setCurrent() is not changed by the application
manager.
The application manager assumes that the application behaves as
follows with respect to the MIDlet events (a minimal
skeleton illustrating these expectations appears after this list):
-
startApp
- The application may call setCurrent()
for the first screen. The application manager makes the
Displayable actually visible when
startApp() returns. Note that startApp()
can be called several times if pauseApp() is called
in between. This means that one-time initialization should not
take place here, and the application should not accidentally
switch to another screen with setCurrent().
-
pauseApp
- The application should release as many threads as
possible. Also, if the application should start with another
screen when it is re-activated, the new screen should be set with
setCurrent().
-
destroyApp
- The application may delete created objects.
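Putting these expectations together, a minimal MIDlet
skeleton might look like the following sketch (the class and screen
names are hypothetical):
import javax.microedition.lcdui.*;
import javax.microedition.midlet.MIDlet;

public class ExampleMIDlet extends MIDlet {
    private Display display;
    private Form mainScreen;     // created once and reused across startApp() calls

    protected void startApp() {
        if (mainScreen == null) {                 // one-time initialization only
            display = Display.getDisplay(this);
            mainScreen = new Form("Main");
        }
        display.setCurrent(mainScreen);           // becomes visible when startApp() returns
    }

    protected void pauseApp() {
        // Release threads and other scarce resources here. If the MIDlet
        // should resume on a different screen, set it with setCurrent() now.
    }

    protected void destroyApp(boolean unconditional) {
        mainScreen = null;                        // let created objects be reclaimed
    }
}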
Event Handling
User interaction causes events, and the implementation notifies
the application of the events by making corresponding
callbacks. There are four kinds of UI callbacks:
-
Abstract commands that are part of the high-level API
-
Low-level events that represent single key presses and releases
(and pointer events, if a pointer is available)
-
Calls to the
paint() method of a
Canvas class
-
Calls to a
Runnable object's run()
method requested by a call to callSerially() of
class Display
All UI callbacks are serialized, so they will never occur in
parallel. That is, the implementation will
never call a callback before a prior call to any
other callback has returned. This property enables applications
to be assured that processing of a previous user event will have
completed before the next event is delivered. If multiple UI
callbacks are pending, the next is called as soon as possible after
the previous UI callback returns. The implementation also
guarantees that the call to run() requested by a call
to callSerially() is made after any pending repaint
requests have been satisfied.
There is one exception to the callback serialization rule, which occurs
when the {@link javax.microedition.lcdui.Canvas#serviceRepaints
Canvas.serviceRepaints} method is called. This method causes
the Canvas.paint method to be called and waits
for it to complete. This occurs even if the caller of
serviceRepaints is itself within an active callback.
There is further discussion of this issue
below.
The following callbacks are all serialized with respect to each other:
- {@link javax.microedition.lcdui.Canvas#hideNotify
Canvas.hideNotify}
- {@link javax.microedition.lcdui.Canvas#keyPressed
Canvas.keyPressed}
- {@link javax.microedition.lcdui.Canvas#keyRepeated
Canvas.keyRepeated}
- {@link javax.microedition.lcdui.Canvas#keyReleased
Canvas.keyReleased}
- {@link javax.microedition.lcdui.Canvas#paint
Canvas.paint}
- {@link javax.microedition.lcdui.Canvas#pointerDragged
Canvas.pointerDragged}
- {@link javax.microedition.lcdui.Canvas#pointerPressed
Canvas.pointerPressed}
- {@link javax.microedition.lcdui.Canvas#pointerReleased
Canvas.pointerReleased}
- {@link javax.microedition.lcdui.Canvas#showNotify
Canvas.showNotify}
- {@link javax.microedition.lcdui.Canvas#sizeChanged
Canvas.sizeChanged}
- {@link javax.microedition.lcdui.CommandListener#commandAction
CommandListener.commandAction}
- {@link javax.microedition.lcdui.CustomItem#getMinContentHeight
CustomItem.getMinContentHeight}
- {@link javax.microedition.lcdui.CustomItem#getMinContentWidth
CustomItem.getMinContentWidth}
- {@link javax.microedition.lcdui.CustomItem#getPrefContentHeight
CustomItem.getPrefContentHeight}
- {@link javax.microedition.lcdui.CustomItem#getPrefContentWidth
CustomItem.getPrefContentWidth}
- {@link javax.microedition.lcdui.CustomItem#hideNotify
CustomItem.hideNotify}
- {@link javax.microedition.lcdui.CustomItem#keyPressed
CustomItem.keyPressed}
- {@link javax.microedition.lcdui.CustomItem#keyRepeated
CustomItem.keyRepeated}
- {@link javax.microedition.lcdui.CustomItem#keyReleased
CustomItem.keyReleased}
- {@link javax.microedition.lcdui.CustomItem#paint
CustomItem.paint}
- {@link javax.microedition.lcdui.CustomItem#pointerDragged
CustomItem.pointerDragged}
- {@link javax.microedition.lcdui.CustomItem#pointerPressed
CustomItem.pointerPressed}
- {@link javax.microedition.lcdui.CustomItem#pointerReleased
CustomItem.pointerReleased}
- {@link javax.microedition.lcdui.CustomItem#showNotify
CustomItem.showNotify}
- {@link javax.microedition.lcdui.CustomItem#sizeChanged
CustomItem.sizeChanged}
- {@link javax.microedition.lcdui.CustomItem#traverse
CustomItem.traverse}
- {@link javax.microedition.lcdui.CustomItem#traverseOut
CustomItem.traverseOut}
- {@link javax.microedition.lcdui.Displayable#sizeChanged
Displayable.sizeChanged}
- {@link javax.microedition.lcdui.ItemCommandListener#commandAction
ItemCommandListener.commandAction}
- {@link javax.microedition.lcdui.ItemStateListener#itemStateChanged
ItemStateListener.itemStateChanged}
-
Runnable.run resulting from a call to
{@link javax.microedition.lcdui.Display#callSerially
Display.callSerially}
Note that {@link java.util.Timer Timer}
events are not considered UI events.
Timer callbacks may run concurrently with UI event
callbacks, although {@link java.util.TimerTask TimerTask}
callbacks scheduled on the same Timer are
serialized with each other.
Applications that use timers must guard their
data structures against concurrent access from timer threads
and UI event callbacks. Alternatively, applications may have
their timer callbacks use
{@link javax.microedition.lcdui.Display#callSerially Display.callSerially}
so that work triggered by timer events can be serialized with
the UI event callbacks.
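For example, the following sketch (the class and variable names are
illustrative) lets a timer hand its UI work to callSerially()
so that the work runs serialized with the other UI callbacks:
import java.util.Timer;
import java.util.TimerTask;
import javax.microedition.lcdui.Display;

class ClockTicker extends TimerTask {
    private final Display display;
    private final Runnable uiUpdate;   // the UI work, e.g. updating a StringItem

    ClockTicker(Display display, Runnable uiUpdate) {
        this.display = display;
        this.uiUpdate = uiUpdate;
    }

    public void run() {
        // This runs on the timer thread; defer the UI work so that it is
        // serialized with the UI event callbacks.
        display.callSerially(uiUpdate);
    }
}

// Usage (illustrative): new Timer().schedule(new ClockTicker(display, uiUpdate), 0, 1000);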
Abstract Commands
Since the MIDP UI is highly abstract, it does not dictate any
concrete user interaction technique like soft buttons or menus. Also,
low-level user interactions such as traversal or scrolling are not
visible to the application. MIDP applications define
Commands, and the implementation may manifest these
via soft buttons, menus, or whatever mechanism is appropriate for
that device.
Commands are installed on a Displayable
(Canvas or Screen) with the
addCommand method of class Displayable;
a short usage sketch appears after the attribute list below.
The native style of the device may assume that certain types of
commands are placed in standard places. For example, the
"go-back" operation may always be mapped to the right
soft button. The Command class allows the application
to communicate such semantic meaning to the implementation so
that these standard mappings can be effected.
The implementation does not actually implement any of the
semantics of the Command. The attributes of a
Command are used only for mapping it onto the user
interface. The actual semantics of a Command are
always implemented by the application in a
CommandListener.
Command objects have attributes:
- Label:
Shown to the user as a hint. A single
Command can
have two versions of labels: short and long. The implementation
decides whether the short or long version is appropriate for a
given situation. For example, an implementation can choose to
use a short version of a given Command near a soft
button and the long version of the Command in a
menu.
- Type:
The purpose of a command. The implementation will use the
command type for placing the command appropriately within the
device's user interface.
Commands with similar
types may, for example, be found near each other in a certain
dedicated place in the user interface. Often, devices will have a
policy for the placement and presentation of certain operations.
For example, a "backward navigation" command might
always be placed on the right soft key on a particular device, but
it might be placed on the left soft key on a different device.
The Command class provides a fixed set of command
types that give the MIDlet the capability to tell
the device implementation the intent of a Command.
The application can use the BACK command type for
commands that perform backward navigation. On the devices
mentioned above, this type information would be used to assign
the command to the appropriate soft key.
- Priority:
Defines the relative importance between
Commands of
the same type. A command with a lower priority value is more
important than a command of the same type but with a higher
priority value. If possible, a more important command is
presented before, or is more easily accessible, than a less
important one.
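For example, the following sketch (the labels, priorities, and listener
bodies are illustrative) attaches a BACK command and a
SCREEN command to a Displayable and supplies
their semantics in a CommandListener:
import javax.microedition.lcdui.*;

class MenuCommands implements CommandListener {
    private final Command backCommand =
            new Command("Back", Command.BACK, 1);           // placed per the device's "back" convention
    private final Command aboutCommand =
            new Command("About", "About this game", Command.SCREEN, 2);

    void install(Displayable d) {
        d.addCommand(backCommand);
        d.addCommand(aboutCommand);
        d.setCommandListener(this);                         // unicast: one listener per Displayable
    }

    public void commandAction(Command c, Displayable d) {
        // The semantics of a Command are supplied by the application.
        if (c == backCommand) {
            // e.g. display.setCurrent(previousScreen);     (application-defined)
        } else if (c == aboutCommand) {
            // e.g. display.setCurrent(aboutScreen);        (application-defined)
        }
    }
}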
Device-Provided Operations
In many high-level UI classes there are also some additional
operations available in the user interface. The additional
operations are not visible to applications, only to the end-user.
The set of operations available depends totally on the user
interface design of the specific device. For example, an operation
that allows the user to change the mode for text input between
alphabetic and numeric is needed in devices that have only an
ITU-T keypad. More complex input systems will require additional
operations. Some of the operations available are presented in the
user interface in the same way as the application-defined commands.
End-users need not understand which operations are provided by the
application and which are provided by the system. Not all operations
are available in every implementation. For example, a system that
has a word-lookup-based text input scheme will generally provide
additional operations within the TextBox class. A
system that lacks such an input scheme will also lack the
corresponding operations.
Some operations are available on all devices, but the way the
operation is implemented may differ greatly from device to device.
Examples of this kind of operation are: the mechanism used to
navigate between List elements and Form
items, the selection of List elements, moving an
insertion position within a text editor, and so forth. Some
devices do not allow the direct editing of the value of an
Item , but instead require the user to switch to an
off-screen editor. In such devices, there must be a dedicated
selection operation that can be used to invoke the off-screen
editor. The selection of List elements could be,
for example, implemented with a dedicated "Go" or
"Select" or some other similar key. Some devices have
no dedicated selection key and must select elements using some
other means.
On devices where the selection operation is performed using a
dedicated select key, this key will often not have a label
displayed for it. It is appropriate for the implementation to use
this key in situations where its meaning is obvious. For example,
if the user is presented with a set of mutually exclusive options,
the selection key will obviously select one of those options.
However, in a device that doesn't have a dedicated select key, it
is likely that the selection operation will be performed using a
soft key that requires a label. The ability to set the
select-command for a List of type
IMPLICIT and the ability to set the default command
for an Item are provided so that the application can
set the label for this operation and so it can receive
notification when this operation occurs.
High-Level API for Events
The handling of events in the high-level API is based on a
listener model. Screens and Canvases may
have listeners for commands. An object willing to be a listener
should implement an interface CommandListener that
has one method:
void commandAction(Command c, Displayable d);
The application gets these events if the Screen or
Canvas has attached Commands and if
there is a registered listener. A unicast version of the listener
model is adopted, so a Screen or
Canvas can have only one listener at a time.
There is also a listener interface for state changes of the
Items in a Form . The method
void itemStateChanged(Item item);
defined in interface ItemStateListener is called when
the value of an interactive Gauge ,
ChoiceGroup , or TextField changes. It
is not expected that the listener will be called after every
change. However, if the value of an Item has been changed, the
listener will be called for the change sometime before it is
called for another item or before a command is delivered to the
Form's CommandListener. It is suggested
that the change listener be called at least after focus (or its
equivalent) is lost from the field. The listener should only be
called if the field's value has actually changed.
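The following sketch (the class, field names, and reaction logic are
illustrative) registers both listener types on a Form:
import javax.microedition.lcdui.*;

class VolumeSettings implements ItemStateListener, CommandListener {
    private final Form form = new Form("Settings");
    private final Gauge volume = new Gauge("Volume", true, 10, 5);   // interactive gauge
    private final Command okCommand = new Command("OK", Command.OK, 1);

    VolumeSettings() {
        form.append(volume);
        form.addCommand(okCommand);
        form.setItemStateListener(this);   // notified when the Gauge value changes
        form.setCommandListener(this);
    }

    public void itemStateChanged(Item item) {
        if (item == volume) {
            int level = volume.getValue(); // react to the new value (application-defined)
        }
    }

    public void commandAction(Command c, Displayable d) {
        if (c == okCommand) {
            // Commit the settings (application-defined).
        }
    }

    Form getForm() {
        return form;
    }
}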
Low-Level API for Events
The class Canvas provides the following methods for
handling low-level key events:
public void keyPressed(int keyCode);
public void keyReleased(int keyCode);
public void keyRepeated(int keyCode);
The last of these, keyRepeated, is not necessarily
available on all devices. Applications can check the
availability of repeat actions by calling the following method of
Canvas:
public boolean hasRepeatEvents();
The API requires that there be standard key codes for the ITU-T keypad
(0-9, *, #), but no keypad layout is required by the API. Although an
implementation may provide additional keys, applications relying on
these keys are not portable.
In addition, the class Canvas has methods for
handling abstract game events. An implementation maps these game
events to suitable keys on the device. For example, a device
with four-way navigation and a select key in the middle could use
those keys, but a simpler device may use certain keys on the
numeric keypad (e.g., 2 , 4 ,
5 , 6 , 8 ). These game events
allow development of portable applications that use the low-level
events. The API defines a set of abstract key-events:
UP , DOWN , LEFT ,
RIGHT , FIRE , GAME_A ,
GAME_B , GAME_C , and
GAME_D .
An application can get the mapping from a key code to an abstract
game event by calling:
public int getGameAction(int keyCode);
If the logic of the application is based on the values returned by
this method, the application is portable and runs regardless of the
keypad design.
It is also possible to map an abstract event to a key with:
public int getKeyCode(int gameAction);
where gameAction is
UP, DOWN, LEFT,
RIGHT, FIRE, etc. On some devices, more
than one key is mapped to the same game action, in which case the
getKeyCode method will return just one of them.
Properly-written applications should map the key code to an
abstract key event and make decisions based on the result.
The mapping between keys and abstract events does not change
during the execution of the game. The following is an
example of how an application can use game actions to interpret
keystrokes.
class MovingBlocksCanvas extends Canvas {
    public void keyPressed(int keyCode) {
        int action = getGameAction(keyCode);
        switch (action) {
        case LEFT:
            moveBlockLeft();
            break;
        case RIGHT:
            ...
        }
    }
}
The low-level API also has support for pointer events; however,
since a pointing device may not be present on all devices, the
following callback methods may never be called on some
devices:
public void pointerPressed(int x, int y);
public void pointerReleased(int x, int y);
public void pointerDragged(int x, int y);
The application may check whether a pointer is available by calling
the following methods of class Canvas:
public boolean hasPointerEvents();
public boolean hasPointerMotionEvents();
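The following sketch (the drawing logic is illustrative) handles
pointer input when it is available and simply never receives these
callbacks otherwise:
import javax.microedition.lcdui.*;

class SketchCanvas extends Canvas {
    private int lastX = -1;
    private int lastY = -1;

    public void paint(Graphics g) {
        g.setColor(0xFFFFFF);                        // clear to white
        g.fillRect(0, 0, getWidth(), getHeight());
        g.setColor(0x000000);
        if (lastX >= 0) {
            g.fillRect(lastX - 1, lastY - 1, 3, 3);  // mark the last pointer position
        }
    }

    public void pointerPressed(int x, int y) {
        // Never called on devices without a pointer; an application can test
        // hasPointerEvents() before relying on this input path.
        lastX = x;
        lastY = y;
        repaint();
    }
}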
Interplay of High-Level Commands and the Low-Level API
The class Canvas , which is used for low-level events
and drawing, is a subclass of Displayable , and
applications can attach Commands to it. This is
useful for jumping to an options setup Screen in the
middle of a game. Another example could be a map-based navigation
application where keys are used for moving in the map but commands
are used for higher-level actions.
Some devices may not have the means to invoke commands when
Canvas and the low-level event mechanism are in use.
In that case, the implementation may provide a means to switch to
a command mode and back. This command mode might pop up a menu
over the contents of the Canvas . In this case, the
Canvas methods hideNotify() and
showNotify() will be called to indicate when the
Canvas has been obscured and unobscured,
respectively.
The Canvas may have a title and a Ticker
like the Screen objects. However,
Canvas also has a full-screen mode where the title
and the Ticker are not displayed. Setting this mode
indicates that the application wishes for the Canvas
to occupy as much of the physical display as is possible. In this
mode, the title may be reused by the implementation as the title
for pop-up menus. In normal (not full-screen) mode, the
appearance of the Canvas should be similar to that of
Screen classes, so that visual continuity is retained
when the application switches between low-level
Canvas objects and high-level Screen
objects.
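The following sketch (the command label and game logic are
illustrative) combines low-level key handling with a high-level
Command on a full-screen Canvas:
import javax.microedition.lcdui.*;

class PlayfieldCanvas extends Canvas {
    private final Command optionsCommand =
            new Command("Options", Command.SCREEN, 1);
    private boolean running = true;

    PlayfieldCanvas(CommandListener listener) {
        setFullScreenMode(true);            // request as much of the display as possible
        addCommand(optionsCommand);         // still reachable via the device's command mechanism
        setCommandListener(listener);
    }

    public void paint(Graphics g) {
        g.setColor(0x000000);
        g.fillRect(0, 0, getWidth(), getHeight());
        g.setColor(0xFFFFFF);
        g.drawString(running ? "Running" : "Paused",
                getWidth() / 2, getHeight() / 2,
                Graphics.HCENTER | Graphics.BASELINE);
    }

    public void keyPressed(int keyCode) {
        if (getGameAction(keyCode) == FIRE) {   // low-level input drives the game itself
            running = !running;
            repaint();
        }
    }

    protected void hideNotify() {
        running = false;    // e.g. a command menu has popped up over the Canvas
    }
}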
Graphics and Text in Low-Level API
The Redrawing Scheme
Repainting is done automatically for all Screens,
but not for Canvas; therefore, developers utilizing
the low-level API must understand its repainting scheme.
In the low-level API, repainting of a Canvas is done
asynchronously so that several repaint requests may be served
within a single paint call as an optimization. This means that the
application requests repainting by calling the method
repaint() of class Canvas. The actual
drawing is done in the method paint()
-- which is provided by the application's subclass of Canvas
-- and does not necessarily happen synchronously with
repaint(). It may happen later, and several repaint
requests may cause one single call to paint(). The
application can flush the pending repaint requests by calling
serviceRepaints().
As an example, assume that an application moves a box of width
wid and height ht from coordinates
(x1, y1) to coordinates (x2, y2), where
x2 > x1 and y2 > y1:
// move coordinates of box
box.x = x2;
box.y = y2;
// ensure old region repainted (with background)
canvas.repaint(x1,y1, wid, ht);
// make new region repainted
canvas.repaint(x2,y2, wid, ht);
// make everything really repainted
canvas.serviceRepaints();
The last call causes the repaint thread to be scheduled. The
repaint thread finds the two requests in the event queue and
repaints the region that is the union of the repaint areas:
graphics.clipRect(x1,y1, (x2-x1+wid), (y2-y1+ht));
canvas.paint(graphics);
In this imaginary part of an implementation, the call
canvas.paint()
causes the application-defined paint()
method to be called.
Drawing Model
The primary drawing operation is pixel replacement, which is used
for geometric rendering operations such as lines and rectangles.
With offscreen images, support for full transparency is required,
and support for partial transparency (alpha blending) is
optional.
A 24-bit color model is provided with 8 bits each for the red,
green, and blue components of a color. Not all devices support
24-bit color, so they will map colors requested by the application
into colors available on the device. Facilities are provided in
the
Display
class for obtaining device characteristics, such as whether color
is available and how many distinct gray levels are available. This
enables applications to adapt their behavior to a device without
compromising device independence.
Graphics may be rendered either directly to the display or to an
off-screen image buffer. The destination of rendered graphics
depends on the origin of the graphics object. A graphics object
for rendering to the display is passed to the Canvas
object's paint() method. This is the only way to
obtain a graphics object whose destination is the
display. Furthermore, applications may draw by using this graphics
object only for the duration of the paint()
method.
A graphics object for rendering to an off-screen image buffer may
be obtained by calling the getGraphics() method on
the desired image. These graphics objects may be held indefinitely
by the application, and requests may be issued on these graphics
objects at any time.
The Graphics class has a current color that is set
with the setColor() method. All geometric rendering,
including lines, rectangles, and arcs, uses the current color.
The pixel representing the current color replaces the destination
pixel in these operations. There is no background color.
Painting of any background must be performed explicitly by the
application using setColor() and the rendering
calls.
Support for full transparency is required, and support for partial
transparency (alpha blending) is optional. Transparency (both
full and partial) exists only in off-screen images loaded from PNG
files or from arrays of ARGB data. Images created in such a
fashion are immutable in that the application is
precluded from making any changes to the pixel data contained
within the image. Rendering is defined in such a way that the
destination of any rendering operation always consists entirely of
fully opaque pixels.
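The following sketch (the resource name /icon.png and the colors are
hypothetical) renders into an off-screen image once and then copies
it to the display in paint():
import java.io.IOException;
import javax.microedition.lcdui.*;

class BadgeCanvas extends Canvas {
    private Image buffer;     // mutable off-screen image
    private Image icon;       // immutable image, may contain transparency

    protected void showNotify() {
        if (buffer == null) {
            buffer = Image.createImage(getWidth(), getHeight()); // off-screen buffer
            try {
                icon = Image.createImage("/icon.png");           // hypothetical resource name
            } catch (IOException e) {
                icon = null;                                     // drawing proceeds without it
            }
            Graphics og = buffer.getGraphics();   // may be held and used at any time
            og.setColor(0x0000FF);                // background must be painted explicitly
            og.fillRect(0, 0, buffer.getWidth(), buffer.getHeight());
            if (icon != null) {
                og.drawImage(icon, 0, 0, Graphics.TOP | Graphics.LEFT);
            }
        }
    }

    public void paint(Graphics g) {
        // The Graphics passed here targets the display and is valid only
        // for the duration of this call.
        g.drawImage(buffer, 0, 0, Graphics.TOP | Graphics.LEFT);
    }
}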
Coordinate System
The origin (0,0) of the available drawing area and
images is in the upper-left corner of the display. The numeric
values of the x-coordinates monotonically increase from left to
right, and the numeric values of the y-coordinates monotonically
increase from top to bottom. Applications may assume that
horizontal and vertical distances in the coordinate system
represent equal distances on the actual device display. If the
shape of the pixels of the device is significantly different from
square, the implementation of the UI will do the required
coordinate transformation. A facility is provided for translating
the origin of the coordinate system. All coordinates are specified
as integers.
The coordinate system represents locations between pixels, not the
pixels themselves. Therefore, the first pixel in the upper left
corner of the display lies in the square bounded by coordinates
(0,0), (1,0), (0,1), (1,1) .
An application may inquire about the available drawing area by calling
the following methods of Canvas:
public int getWidth();
public int getHeight();
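A short sketch showing size inquiry and coordinate translation inside
paint() (the colors and the box size are arbitrary):
import javax.microedition.lcdui.*;

class CenteredBoxCanvas extends Canvas {
    public void paint(Graphics g) {
        int w = getWidth();            // adapt to the actual display size
        int h = getHeight();
        g.setColor(0xFFFFFF);
        g.fillRect(0, 0, w, h);
        g.translate(w / 2, h / 2);     // move the origin to the centre of the display
        g.setColor(0xFF0000);
        g.drawRect(-20, -20, 40, 40);  // a 40 x 40 box centred on the new origin
    }
}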
Font Support
An application may request one of the font attributes specified
below. However, the underlying implementation may use a subset of
what is specified. So it is up to the implementation to return a
font that most closely resembles the requested font.
Each font in the system is implemented individually. A programmer
will call the static getFont() method instead of
instantiating new Font objects. This paradigm
eliminates the garbage creation normally associated with the use
of fonts.
The Font class provides calls that access font
metrics. The following attributes may be used to request a font
from the Font class (a usage sketch follows this
list):
-
Size: SMALL, MEDIUM, LARGE.
-
Face: PROPORTIONAL, MONOSPACE, SYSTEM.
-
Style: PLAIN, BOLD, ITALIC, UNDERLINED.
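A sketch that requests a font and uses its metrics to position text
(the message text is illustrative):
import javax.microedition.lcdui.*;

class TitleCanvas extends Canvas {
    public void paint(Graphics g) {
        // The implementation returns the font that most closely matches the request.
        Font font = Font.getFont(Font.FACE_PROPORTIONAL,
                                 Font.STYLE_BOLD,
                                 Font.SIZE_LARGE);
        g.setColor(0xFFFFFF);
        g.fillRect(0, 0, getWidth(), getHeight());
        g.setFont(font);
        g.setColor(0x000000);
        String msg = "High score";
        int x = (getWidth() - font.stringWidth(msg)) / 2;   // centre using the font metrics
        int y = (getHeight() - font.getHeight()) / 2;
        g.drawString(msg, x, y, Graphics.TOP | Graphics.LEFT);
    }
}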
Concurrency
The UI API has been designed to be thread-safe. The methods may be
called from callbacks, TimerTasks , or other threads created
by the application. Also, the implementation generally does not hold any
locks on objects visible to the application. This means that the
applications' threads can synchronize with themselves and with the event
callbacks by locking any object according to a synchronization policy
defined by the application. One exception to this rule occurs with the
{@link javax.microedition.lcdui.Canvas#serviceRepaints
Canvas.serviceRepaints} method. This method calls and awaits
completion of the paint method. Strictly speaking,
serviceRepaints might not call paint
directly, but instead it might cause another thread to call
paint . In either case, serviceRepaints
blocks until paint has returned. This is a significant
point because of the following case. Suppose the caller of
serviceRepaints holds a lock that is also needed by the
paint method. Since paint might be called
from another thread, that thread will block trying to acquire the lock.
However, this lock is held by the caller of serviceRepaints ,
which is blocked waiting for paint to return. The result
is deadlock. In order to avoid deadlock, the caller of
serviceRepaints must not hold any locks
needed by the paint method.
The UI API also includes a mechanism, similar to other UI toolkits,
for serializing actions with the event stream. The method
{@link javax.microedition.lcdui.Display#callSerially Display.callSerially}
requests that the run method of a Runnable
object be called, serialized with the event stream. Code that uses
serviceRepaints() can usually be rewritten to use
callSerially() . The following code illustrates
this technique:
class MyCanvas extends Canvas {
    void doStuff() {
        // <code fragment 1>
        serviceRepaints();
        // <code fragment 2>
    }
}
The following code is an alternative way of implementing the same
functionality:
class MyClass extends Canvas
        implements Runnable {
    Display display;   // assumed to have been obtained with Display.getDisplay(midlet)
    void doStuff() {
        // <code fragment 1>
        display.callSerially(this);
    }
    // called only after all pending repaints served
    public void run() {
        // <code fragment 2>;
    }
}
Implementation Notes
The implementation of a List or
ChoiceGroup may include keyboard shortcuts for
focusing and selecting the choice elements, but the use of these
shortcuts is not visible to the application program.
In some implementations the UI components -- Screens
and Items -- will be based on native components. It
is up to the implementation to free the used resources when the
Java objects are not needed anymore. One possible implementation
scenario is a hook in the garbage collector of KVM.
@since MIDP 1.0