Dec 22, 2009

Exploring Java: Chapter 6, Threads

http://oreilly.com/catalog/expjava/excerpt/index.html


Contents:
Introducing Threads
Threading Applets
Synchronization
Scheduling and Priority

Threads have been around for some time, but few programmers have actually worked with them. There is even some debate over whether or not the average programmer can use threads effectively. In Java, working with threads can be easy and productive. In fact, threads provide the only way to effectively handle a number of tasks. So it's important that you become familiar with threads early in your exploration of Java.

Threads are integral to the way Java works. We've already seen that an applet's paint() method isn't called by the applet itself, but by another thread within the interpreter. At any given time, there may be many such background threads, performing activities in parallel with your application. In fact, it's easy to get a half dozen or more threads running in an applet without even trying, simply by requesting images, updating the screen, playing audio, and so on. But these things happen behind the scenes; you don't normally have to worry about them. In this chapter, we'll talk about writing applications that create and use their own threads explicitly.
Introducing Threads

Conceptually, a thread is a flow of control within a program. A thread is similar to the more familiar notion of a process, except that multiple threads within the same application share much of the same state--in particular, they run in the same address space. It's not unlike a golf course, which can be used by many players at the same time. Sharing the same address space means that threads share instance variables, but not local variables, just like players share the golf course, but not personal things like clubs and balls.

Multiple threads in an application have the same problems as the players sharing a golf course: in a word, synchronization. Just as you can't have two sets of players blindly playing the same green at the same time, you can't have several threads trying to access the same variables without some kind of coordination. Someone is bound to get hurt. A thread can reserve the right to use an object until it's finished with its task, just as a golf party gets exclusive rights to the green until it's done. And a thread that is more important can raise its priority, asserting its right to play through.

The devil is in the details, of course, and those details have historically made threads difficult to use. Java makes creating, controlling, and coordinating threads simple. When creating a new thread is the best way to accomplish some task, it should be as easy as adding a new component to your application.

It is common to stumble over threads when you first look at them, because creating a thread exercises many of your new Java skills all at once. You can avoid confusion by remembering there are always two players involved in running a thread: a Java language object that represents the thread itself and an arbitrary target object that contains the method the thread is to execute. Later, you will see that it is possible to play some sleight of hand and combine these two roles, but that special case just changes the packaging, not the relationship.
The Thread Class and the Runnable Interface

A new thread is born when we create an instance of the java.lang.Thread class. The Thread object represents a real thread in the Java interpreter and serves as a handle for controlling and synchronizing its execution. With it, we can start the thread, stop the thread, or suspend it temporarily. The constructor for the Thread class accepts information about where the thread should begin its execution. Conceptually, we would like to simply tell it what method to run, but since there are no pointers to methods in Java, we can't specify one directly. Instead, we have to take a short detour and use the Runnable interface to create an object that contains a "runnable" method.

An object that wants to serve as the target of a Thread can declare that it has an appropriate executable method by implementing the java.lang.Runnable interface. Runnable defines a single, general-purpose method:

public interface Runnable {
    abstract public void run();
}

Every thread begins its life by executing a run() method in a particular object. run() is a rather mundane method that can hold an arbitrary body of code. It is public, takes no arguments, has no return value, and is not allowed to throw any checked exceptions.

Any class can contain an appropriate run() method, simply by declaring that it implements the Runnable interface. An instance of this class is then a runnable object that can serve as the target of a new Thread. In this way, we can effectively run a method in any object we want.
Creating and starting threads

A newly born Thread remains idle until we give it a figurative slap on the bottom by calling its start() method. The thread then wakes up and proceeds to execute the run() method of its target object. start() can be called only once in the lifetime of a Thread. Once a thread starts, it continues running until the target object's run() method completes, or we call the thread's stop() method to kill the thread permanently. A little later, we will look at some other methods you can use to control the thread's progress while it is running.

Now let's look at an example. The following class, Animation, implements a run() method to drive its drawing loop:

class Animation implements Runnable {
    ...
    public void run() {
        while ( true ) {
            // Draw Frames
            ...
            repaint();
        }
    }
}

To use it, we create a Thread object with an instance of Animation as its target object, and invoke its start() method. We can perform these steps explicitly, as in the following:

Animation happy = new Animation("Mr. Happy");
Thread myThread = new Thread( happy );
myThread.start();
...

Here we have created an instance of our Animation class and passed it as the argument to the constructor for myThread. When we call the start() method, myThread begins to execute Animation's run() method. Let the show begin!

The above situation is not terribly object oriented. More often, we want an object to handle its own thread, as shown in Figure 6-1.
Figure 6-1: Interaction between Animation and its Thread

[Graphic: Figure 6-1]

Figure 6-1 depicts a Runnable object that creates and starts its own Thread. We can have our Animation class perform these actions in its constructor:

class Animation implements Runnable {

    Thread myThread;

    Animation (String name) {
        myThread = new Thread( this );
        myThread.start();
    }
    ...

In this case, the argument we pass to the Thread constructor is this, the current object instance. We keep the Thread reference in the instance variable myThread, in case we want to stop the show, or exercise some other kind of control.
A natural born thread

The Runnable interface lets us make an arbitrary object the target of a thread, as we did above. This is the most important, general usage of the Thread class. In most situations where you need to use threads, you'll create a class that implements the Runnable interface. I'd be remiss, however, if I didn't show you the other technique for creating a thread. Another design option is to make our target class a subclass of a type that is already runnable. The Thread class itself implements the Runnable interface; it has its own run() method we can override to make it do something useful:

class Animation extends Thread {
    ...

    public void run() {
        while ( true ) {
            // Draw Frames
            ...
            repaint();
        }
    }
}

The skeleton of our Animation class above looks much the same as before, except that our class is now a kind of Thread. To go along with this scheme, the default (empty) constructor of the Thread class makes itself the default target. That is, by default, the Thread executes its own run() method when we call the start() method, as shown in Figure 6-2. Note that our subclass must override the run() method in the Thread class because Thread simply defines an empty run() method.
Figure 6-2: Animation as a subclass of Thread

[Graphic: Figure 6-2]

Now we create an instance of Animation and call its start() method:

Animation bouncy = new Animation("Bouncy");
bouncy.start();

Alternatively, we can have the Animation object start itself when it is created, as before:

class Animation extends Thread {

    Animation (String name) {
        start();
    }
    ...
Here our Animation object just calls its own start() method when it is created.

Subclassing Thread probably seems like a convenient way to bundle a Thread and its target run() method. However, as always, you should let good object-oriented design dictate how you structure your classes. In most cases, a specific run() method is probably closely related to the functionality of a particular class in your application, so you should implement run() in that class. This technique has the added advantage of allowing run() to access any private variables and methods it might need in the class.

If you subclass Thread to implement a thread, you are saying you need a new type of object that is a kind of Thread. While there is something unnaturally satisfying about making an object primarily concerned with performing a single task (like animation), the actual situations where you'll want to create a subclass of Thread should be rather rare. If you find you're subclassing Thread left and right, you may want to examine whether you are falling into the design trap of making objects that are simply glorified functions.
Controlling Threads

We have seen the start() method used to bring a newly created Thread to life. Three other methods let us control a Thread's execution: stop(), suspend(), and resume(). None of these methods take any arguments; they all operate on the Thread object on which they are invoked. The stop() method complements start(); it destroys the thread. start() and stop() can be called only once in the lifetime of a Thread. By contrast, the suspend() and resume() methods can be used to arbitrarily pause and then restart the execution of a Thread.

Often, for simple tasks, it is easy enough to throw away a thread when we want to stop it and simply create a new one when we want to proceed again. suspend() and resume() can be used in situations where the Thread's setup is very expensive. For example, if creating the thread involves opening a socket and setting up some elaborate communication, it probably makes more sense to use suspend() and resume() with this thread.

Another common need is to put a thread to sleep for some period of time. Thread.sleep() is a static method of the Thread class that causes the currently executing thread to delay for a specified number of milliseconds:

try {
    Thread.sleep ( 1000 );
}
catch ( InterruptedException e ) {
}

Thread.sleep() throws an InterruptedException if it is interrupted by another Thread.[1] When a thread is asleep, or otherwise blocked on input of some kind, it doesn't consume CPU time or compete with other threads for processing. We'll talk more about thread priority and scheduling later.

[1] The Thread class contains an interrupt() method to allow one thread to interrupt another thread, but this functionality is not implemented in Java 1.0.

A Thread's Life

A Thread continues to execute until one of the following things happens:

* It returns from its target run() method
* It's interrupted by an uncaught exception
* Its stop() method is called

So what happens if the run() method for a thread never terminates, and the application that started the thread never calls its stop() method? The answer is that the thread lives on, even after the application that created it has finished. This means we have to be aware of how our threads eventually terminate, or an application can end up leaving orphaned threads that unnecessarily consume resources.

In many cases, what we really want is to create background threads that do simple, periodic tasks in an application. The setDaemon() method can be used to mark a Thread as a daemon thread that should be killed and discarded when no other application threads remain. Normally, the Java interpreter continues to run until all threads have completed. But when daemon threads are the only threads still alive, the interpreter will exit.

Here's a devilish example of using daemon threads:

class Devil extends Thread {

    Devil() {
        setDaemon( true );
        start();
    }

    public void run() {
        // Perform evil tasks
        ...
    }
}

In the above example, the Devil thread sets its daemon status when it is created. If any Devil threads remain when our application is otherwise complete, Java kills them for us. We don't have to worry about cleaning them up.

Daemon threads are primarily useful in standalone Java applications and in the implementation of the Java system itself, but not in applets. Since an applet runs inside of another Java application, any daemon threads it creates will continue to live until the controlling application exits--probably not the desired effect.
Threading Applets

Applets are embeddable Java applications that are expected to be able to start and stop themselves on command. Unlike threads, applets can be started and stopped any number of times. A Java-enabled Web browser normally starts an applet when the applet is displayed and stops it when the user moves to another page or scrolls the applet out of view. In general, we would like an applet to cease its nonessential activity when it is stopped, and resume it when started again. (See Chapter 10, The Abstract Windowing Toolkit for a complete discussion of applets).

In this section, we will build UpdateApplet, a simple base class for an Applet that maintains a Thread to automatically update its display at regular intervals. UpdateApplet handles the basic starting and stopping behavior for us, as shown below.

public class UpdateApplet extends java.applet.Applet
    implements Runnable {

    private Thread updateThread;
    int updateInterval = 1000;

    public void run() {
        while ( true ) {
            try {
                Thread.sleep( updateInterval );
            }
            catch ( InterruptedException e ) { }

            repaint();
        }
    }

    public void start() {
        if ( updateThread == null ) {
            updateThread = new Thread(this);
            updateThread.start();
        }
    }

    public void stop() {
        if ( updateThread != null ) {
            updateThread.stop();
            updateThread = null;
        }
    }
}

UpdateApplet is a Runnable object that alternately sleeps and calls its repaint() method. It has two other public methods: start() and stop(). These are methods of the Applet class we are overriding; do not confuse them with the similarly named methods of the Thread class. These start() and stop() methods are called by the Java environment to tell the applet when it should and should not be running.

UpdateApplet illustrates an environmentally friendly way to deal with threads in a simple applet. UpdateApplet kills its thread each time the applet is stopped and recreates it if the applet is restarted. When UpdateApplet's start() method is called, we first check to make sure there is no currently executing updateThread. We then create one to begin our execution. When our applet is subsequently stopped, we kill the thread by invoking its stop() method and throw away the reference by setting it to null. Setting updateThread to null serves both to allow the garbage collector to clean up the dead Thread object, and to indicate to UpdateApplet's start() method that the thread is gone.

In truth, an Applet's start() and stop() methods are guaranteed to be called in sequence. As a result, we shouldn't have to check for the existence of updateThread in start() (it should always be null). However, it's good programming practice to perform the test. If we didn't, and for some reason stop() were to fail at its job, we might inadvertently start a lot of threads.

With UpdateApplet doing all of the work for us, we can now create the world's simplest clock applet with just a few lines of code. Figure 6-3 shows our Clock. (This might be a good one to run on your Java wrist watch.)

public class Clock extends UpdateApplet {
    public void paint( java.awt.Graphics g ) {
        g.drawString( new java.util.Date().toString(), 10, 25 );
    }
}

Figure 6-3: The clock applet

[Graphic: Figure 6-3]

The java.util.Date().toString() sequence simply creates a string that contains the current time; we'll see where this code comes from in Chapter 7, Basic Utility Classes.

Our Clock applet provides a good example of a simple thread; we don't mind throwing it away and subsequently rebuilding it if the user should happen to wander on and off of our Web page a few times. But what if the task that our thread handles isn't so simple? What if, for instance, we have to open a socket and establish a connection with another system? One solution is to use Thread's suspend() and resume() methods, as I'll show you momentarily.

Now if you've been wondering why we've been using stop() to kill the thread, rather than using the suspend() and resume() methods all along, here's the explanation you've been waiting for. The problem with applets is that we have no control over how a user navigates Web pages. For example, say a user scrolls our applet out of view, and we use suspend() to suspend the applet. Now we have no way of ensuring that the user will bring the applet back into view before moving on to another page. And actually, the same situation would occur if the user simply moves on to another page and never comes back.

If we call suspend(), we'd really like to make sure we call resume() at a later date, or we'll end up leaving the thread hanging in permanent suspense. But we have no way of knowing if the applet will ever be restarted, so just putting a call to resume() in the applet's start() method won't work. Leaving the suspended thread around forever might not hurt us, but it's not good programming practice to be wasteful. What we need is a way to guarantee we can clean up our mess if the applet is never used again. What to do?

There is a solution for this dilemma, but in many cases, like with our simple Clock, it's just easier to use stop(), with a subsequent call to start() if necessary. In cases where it is expensive to set up and tear down a thread, we could make the following modifications to UpdateApplet:

public void start() {
    if ( updateThread == null ) {
        updateThread = new Thread(this);
        updateThread.start();
    }
    else
        updateThread.resume();
}

public void stop() {
    updateThread.suspend();
}

public void destroy() {
    if ( updateThread != null ) {
        updateThread.stop();
        updateThread = null;
    }
}

These modifications change UpdateApplet so that it suspends and restarts its updateThread, rather than killing and recreating it. The new start() method creates the thread and calls start() if updateThread is null; otherwise it assumes that the thread has been suspended, so it calls resume(). The applet's stop() method simply suspends the thread by calling suspend().

What's new here is the destroy() method. This is another method that UpdateApplet inherits from the Applet class. The method is called by the Java environment when the applet is going to be removed (often from a cache). It provides a place where we can free up any resources the applet is holding. This is the perfect place to cut the suspense and clean up after our thread. In our destroy() method, we check to see that the thread exists, and if it does, we call stop() to kill it and set its reference to null.
Synchronization

Every thread has a life of its own. Normally, a thread goes about its business without any regard for what other threads in the application are doing. Threads may be time-sliced, which means they can run in arbitrary spurts and bursts as directed by the operating system. On a multiprocessor system, it is even possible for many different threads to be running simultaneously on different CPUs. This section is about coordinating the activities of two or more threads, so they can work together and not collide in their use of the same address space.

Java provides a few simple structures for synchronizing the activities of threads. They are all based on the concept of monitors, a widely used synchronization scheme developed by C.A.R. Hoare. You don't have to know the details about how monitors work to be able to use them, but it may help you to have a picture in mind.

A monitor is essentially a lock. The lock is attached to a resource that many threads may need to access, but that should be accessed by only one thread at a time. It's not unlike a public restroom at a gas station. If the resource is not being used, the thread can acquire the lock and access the resource. By the same token, if the restroom is unlocked, you can enter and lock the door. When the thread is done, it relinquishes the lock, just as you unlock the door and leave it open for the next person. However, if another thread already has the lock for the resource, all other threads have to wait until the current thread finishes and releases the lock, just as if the restroom is locked when you arrive, you have to wait until the current occupant is done and unlocks the door.

Fortunately, Java makes the process of synchronizing access to resources quite easy. The language handles setting up and acquiring locks; all you have to do is specify which resources require locks.
Serializing Methods

The most common need for synchronization among threads in Java is to serialize their access to some resource, namely an object. In other words, synchronization makes sure only one thread at a time can perform certain activities that manipulate an object. In Java, every object has a lock associated with it. To be more specific, every class and every instance of a class has its own lock. The synchronized keyword marks places where a thread must acquire the lock before proceeding.

For example, say we implemented a SpeechSynthesizer class that contains a say() method. We don't want multiple threads calling say() at the same time or we wouldn't be able to understand anything being said. So we mark the say() method as synchronized, which means that a thread has to acquire the lock on the SpeechSynthesizer object before it can speak:

class SpeechSynthesizer {

    synchronized void say( String words ) {
        // Speak
    }
}

Because say() is an instance method, a thread has to acquire the lock on the particular SpeechSynthesizer instance it is using before it can invoke the say() method. When say() has completed, it gives up the lock, which allows the next waiting thread to acquire the lock and run the method. Note that it doesn't matter whether the thread is owned by the SpeechSynthesizer itself or some other object; every thread has to acquire the same lock, that of the SpeechSynthesizer instance. If say() were a class (static) method instead of an instance method, we could still mark it as synchronized. But in this case, since there is no instance involved, the lock would be on the class object itself.
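To make the class-lock case concrete, here is a small sketch of our own (the StaticLockDemo name and the counts are invented for illustration, not code from the chapter): two threads hammer on a static synchronized method, and because both must acquire the lock on the class object, no increments are lost:

```java
class StaticLockDemo implements Runnable {
    static int count = 0;

    // A static synchronized method locks the class object itself,
    // so every thread in every instance contends for the same lock.
    static synchronized void increment() {
        count++;
    }

    public void run() {
        for ( int i = 0; i < 10000; i++ )
            increment();
    }

    static int runDemo() throws InterruptedException {
        count = 0;
        Thread t1 = new Thread( new StaticLockDemo() );
        Thread t2 = new Thread( new StaticLockDemo() );
        t1.start(); t2.start();
        t1.join(); t2.join();
        return count;   // always 20000; without synchronized, updates could be lost
    }

    public static void main( String[] args ) throws InterruptedException {
        System.out.println( "count = " + runDemo() );
    }
}
```

Without the synchronized keyword, the two threads could interleave their read-increment-write sequences and the final count could come up short.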

Often, you want to synchronize multiple methods of the same class, so that only one of the methods modifies or examines parts of the class at a time. All static synchronized methods in a class use the same class object lock. By the same token, all synchronized instance methods of a given object use that object's lock. In this way, Java can guarantee that only one of a set of synchronized methods is running at a time on a given object. For example, a SpreadSheet class might contain a number of instance variables that represent cell values, as well as some methods that manipulate the cells in a row:

class SpreadSheet {

    int cellA1, cellA2, cellA3;

    synchronized int sumRow() {
        return cellA1 + cellA2 + cellA3;
    }

    synchronized void setRow( int a1, int a2, int a3 ) {
        cellA1 = a1;
        cellA2 = a2;
        cellA3 = a3;
    }
    ...
}

In this example, both methods setRow() and sumRow() access the cell values. You can see that problems might arise if one thread were changing the values of the variables in setRow() at the same moment another thread was reading the values in sumRow(). To prevent this, we have marked both methods as synchronized. When threads are synchronized, only one will be run at a time. If a thread is in the middle of executing setRow() when another thread calls sumRow(), the second thread waits until the first one is done executing setRow() before it gets to run sumRow(). This synchronization allows us to preserve the consistency of the SpreadSheet. And the best part is that all of this locking and waiting is handled by Java; it's transparent to the programmer.
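As a rough illustration of the guarantee (this demo, with its Sheet and RowWriter names, is our own sketch, not code from the chapter), a writer thread keeps every row filled with three equal values while the main thread repeatedly sums the row; because both methods lock the same instance, every sum it sees is a multiple of three:

```java
class Sheet {
    int cellA1, cellA2, cellA3;

    synchronized int sumRow() {
        return cellA1 + cellA2 + cellA3;
    }

    synchronized void setRow( int a1, int a2, int a3 ) {
        cellA1 = a1;
        cellA2 = a2;
        cellA3 = a3;
    }
}

class RowWriter implements Runnable {
    Sheet sheet;

    RowWriter( Sheet s ) { sheet = s; }

    public void run() {
        for ( int i = 0; i < 100000; i++ )
            sheet.setRow( i, i, i );    // every row holds three equal values
    }
}

class SheetDemo {
    static boolean runDemo() throws InterruptedException {
        Sheet sheet = new Sheet();
        Thread writer = new Thread( new RowWriter( sheet ) );
        writer.start();
        boolean consistent = true;
        for ( int i = 0; i < 100000; i++ )
            if ( sheet.sumRow() % 3 != 0 )  // a torn read would break this invariant
                consistent = false;
        writer.join();
        return consistent;
    }

    public static void main( String[] args ) throws InterruptedException {
        System.out.println( "consistent = " + runDemo() );
    }
}
```

If you remove synchronized from either method, the reader can catch the row half-updated and the invariant can fail.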

In addition to synchronizing entire methods, the synchronized keyword can be used in a special construct to guard arbitrary blocks of code. In this form it also takes an explicit argument that specifies the object for which it is to acquire a lock:

synchronized ( myObject ) {
    // Functionality that needs to be synced
    ...
}

The code block above can appear in any method. When it is reached, the thread has to acquire the lock on myObject before proceeding. In this way, we can have methods (or parts of methods) in different classes synchronized the same as methods in the same class.
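Here's a small sketch of that idea (the Counter and Incrementer names are our own invention): threads running in one class update a shared object of another class, synchronizing explicitly on the shared object rather than on themselves:

```java
// A shared object with no synchronization of its own
class Counter {
    int value = 0;
}

class Incrementer implements Runnable {
    Counter counter;

    Incrementer( Counter c ) { counter = c; }

    public void run() {
        for ( int i = 0; i < 10000; i++ ) {
            // Lock the shared Counter, not this Incrementer
            synchronized ( counter ) {
                counter.value++;
            }
        }
    }
}

class BlockSyncDemo {
    static int runDemo() throws InterruptedException {
        Counter c = new Counter();
        Thread t1 = new Thread( new Incrementer( c ) );
        Thread t2 = new Thread( new Incrementer( c ) );
        t1.start(); t2.start();
        t1.join(); t2.join();
        return c.value;    // always 20000, because both threads use c's lock
    }

    public static void main( String[] args ) throws InterruptedException {
        System.out.println( "value = " + runDemo() );
    }
}
```

Because both threads synchronize on the same Counter instance, their updates are serialized even though the synchronized blocks live in a different class entirely.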

A synchronized method is, therefore, equivalent to a method with its statements synchronized on the current object. Thus:

synchronized void myMethod () {
    ...
}

is equivalent to:

void myMethod () {
    synchronized ( this ) {
        ...
    }
}

wait() and notify()

With the synchronized keyword, we can serialize the execution of complete methods and blocks of code. The wait() and notify() methods of the Object class extend this capability. Every object in Java is a subclass of Object, so every object inherits these methods. By using wait() and notify(), a thread can give up its hold on a lock at an arbitrary point, and then wait for another thread to give it back before continuing. All of the coordinated activity still happens inside of synchronized blocks, and still only one thread is executing at a given time.

By executing wait() from a synchronized block, a thread gives up its hold on the lock and goes to sleep. A thread might do this if it needs to wait for something to happen in another part of the application, as you'll see shortly. Later, when the necessary event happens, the thread that is running it calls notify() from a block synchronized on the same object. Now the first thread wakes up and begins trying to acquire the lock again.

When the first thread manages to reacquire the lock, it continues from the point it left off. However, the thread that waited may not get the lock immediately (or perhaps ever). It depends on when the second thread eventually releases the lock, and which thread manages to snag it next. Note also, that the first thread won't wake up from the wait() unless another thread calls notify(). There is an overloaded version of wait(), however, that allows us to specify a timeout period. If another thread doesn't call notify() in the specified period, the waiting thread automatically wakes up.
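A minimal sketch of the timeout form (our own example, not from the chapter): a thread waits up to a fixed number of milliseconds for a flag that, in this case, nobody ever sets, so the wait simply expires and the method reports that the condition never came true:

```java
class TimedWaitDemo {
    boolean ready = false;

    // Wait up to timeout milliseconds for the flag; report its final state.
    synchronized boolean waitForReady( long timeout ) throws InterruptedException {
        if ( !ready )
            wait( timeout );    // wakes on notify() or when the timeout expires
        return ready;
    }

    public static void main( String[] args ) throws InterruptedException {
        TimedWaitDemo demo = new TimedWaitDemo();
        // Nobody ever calls notify(), so this returns false after about 200ms
        System.out.println( "ready = " + demo.waitForReady( 200 ) );
    }
}
```

In real code you would usually re-check the condition in a loop, as described below, rather than trusting a single wakeup.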

Let's look at a simple scenario to see what's going on. In the following example, we'll assume there are three threads--one waiting to execute each of the three synchronized methods of the MyThing class. We'll call them the waiter, notifier, and related threads, respectively. Here's a code fragment to illustrate:

class MyThing {

    synchronized void waiterMethod() {
        // Do some stuff

        // Now we need to wait for notifier to do something
        wait();

        // Continue where we left off
    }

    synchronized void notifierMethod() {
        // Do some stuff

        // Notify waiter that we've done it
        notify();

        // Do more things
    }

    synchronized void relatedMethod() {
        // Do some related stuff
    }
}

Let's assume waiter gets through the gate first and begins executing waiterMethod(). The two other threads are initially blocked, trying to acquire the lock for the MyThing object. When waiter executes the wait() method, it relinquishes its hold on the lock and goes to sleep. Now there are two viable threads waiting for the lock. Which thread gets it depends on several factors, including chance and the priorities of the threads. (We'll discuss thread scheduling in the next section.)

Let's say that notifier is the next thread to acquire the lock, so it begins to run. waiter continues to sleep and related languishes, waiting for its turn. When notifier executes the call to notify(), Java prods the waiter thread, effectively telling it something has changed. waiter then wakes up and rejoins related in vying for the MyThing lock. Note that it doesn't actually receive the lock; it just changes from saying "leave me alone" to "I want the lock."

At this point, notifier still owns the lock and continues to hold it until it leaves its synchronized method (or perhaps executes a wait() itself). When it finally completes, the other two threads get to fight over the lock. waiter would like to continue executing waiterMethod() from the point it left off, while related, which has been patient, would like to get started. We'll let you choose your own ending for the story.

For each call to notify(), Java wakes up just one thread that is asleep in a wait() call. If there are multiple threads waiting, Java picks one of them; you can't rely on which one it will be. The Object class also provides a notifyAll() call to wake up all waiting threads. In most cases, you'll probably want to use notifyAll() rather than notify(). Keep in mind that notify() really means "Hey, something related to this object has changed. The condition you are waiting for may have changed, so check it again." In general, there is no reason to assume only one thread at a time is interested in the change or able to act upon it. Different threads might look upon whatever has changed in different ways.

Often, our waiter thread is waiting for a particular condition to change and we will want to sit in a loop like the following:

...
while ( condition != true )
    wait();
...

Other synchronized threads call notify() or notifyAll() when they have modified the environment so that waiter can check the condition again. This is the civilized alternative to polling and sleeping, as you'll see in the following example.
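To see notifyAll() paired with a condition loop, consider a sketch like this (the Gate and Waiter classes are our own invention, not code from the chapter): several threads wait on a flag, and a single notifyAll() releases them all, each re-checking the condition as it wakes:

```java
class Gate {
    private boolean open = false;

    synchronized void waitForOpen() throws InterruptedException {
        while ( !open )     // re-check the condition after every wakeup
            wait();
    }

    synchronized void openGate() {
        open = true;
        notifyAll();        // wake every waiter; each re-checks the condition
    }
}

class Waiter implements Runnable {
    Gate gate;

    Waiter( Gate g ) { gate = g; }

    public void run() {
        try {
            gate.waitForOpen();
        }
        catch ( InterruptedException e ) { }
    }
}

class GateDemo {
    static int runDemo() throws InterruptedException {
        Gate gate = new Gate();
        Thread[] waiters = new Thread[3];
        for ( int i = 0; i < waiters.length; i++ ) {
            waiters[i] = new Thread( new Waiter( gate ) );
            waiters[i].start();
        }
        Thread.sleep( 100 );    // give the waiters time to block in wait()
        gate.openGate();        // one notifyAll() releases all of them
        int released = 0;
        for ( int i = 0; i < waiters.length; i++ ) {
            waiters[i].join();
            released++;
        }
        return released;
    }

    public static void main( String[] args ) throws InterruptedException {
        System.out.println( runDemo() + " waiters released" );
    }
}
```

Had openGate() used notify() instead, only one waiter would have woken, and the other two would have slept forever.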
The Message Passer

Now we'll illustrate a classic interaction between two threads: a Producer and a Consumer. A producer thread creates messages and places them into a queue, while a consumer reads them out and displays them. To be realistic, we'll give the queue a maximum depth. And to make things really interesting, we'll have our consumer thread be lazy and run much slower than the producer. This means that Producer occasionally has to stop and wait for Consumer to catch up. The example below shows the Producer and Consumer classes.

import java.util.Vector;

class Producer extends Thread {
    static final int MAXQUEUE = 5;
    private Vector messages = new Vector();

    public void run() {
        try {
            while ( true ) {
                putMessage();
                sleep( 1000 );
            }
        }
        catch( InterruptedException e ) { }
    }

    private synchronized void putMessage()
        throws InterruptedException {

        while ( messages.size() == MAXQUEUE )
            wait();
        messages.addElement( new java.util.Date().toString() );
        notify();
    }

    // Called by Consumer
    public synchronized String getMessage()
        throws InterruptedException {

        notify();
        while ( messages.size() == 0 )
            wait();
        String message = (String)messages.firstElement();
        messages.removeElement( message );
        return message;
    }
}

class Consumer extends Thread {
    Producer producer;

    Consumer(Producer p) {
        producer = p;
    }

    public void run() {
        try {
            while ( true ) {
                String message = producer.getMessage();
                System.out.println("Got message: " + message);
                sleep( 2000 );
            }
        }
        catch( InterruptedException e ) { }
    }

    public static void main(String args[]) {
        Producer producer = new Producer();
        producer.start();
        new Consumer( producer ).start();
    }
}

For convenience, we have included a main() method that runs the complete example in the Consumer class. It creates a Consumer that is tied to a Producer and starts the two threads. You can run the example as follows:

% java Consumer

The output is the time-stamp messages created by the Producer:

Got message: Sun Dec 19 03:35:55 CST 1996
Got message: Sun Dec 19 03:35:56 CST 1996
Got message: Sun Dec 19 03:35:57 CST 1996
...

The time stamps initially show a spacing of one second, although they appear every two seconds. Our Producer runs faster than our Consumer. Producer would like to generate a new message every second, while Consumer gets around to reading and displaying a message only every two seconds. Can you see how long it will take the message queue to fill up? What will happen when it does?

Let's look at the code. We are using a few new tools here. Producer and Consumer are subclasses of Thread. It would have been a better design decision to have Producer and Consumer implement the Runnable interface, but we took the slightly easier path and subclassed Thread. You should find it fairly simple to use the other technique; you might try it as an exercise.
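As a sketch of that exercise, here is the Producer reworked to implement Runnable; the class names below are ours, and the queue logic is unchanged. The main differences: sleep() is no longer inherited, so we call Thread.sleep() statically, and the Runnable must be wrapped in a Thread explicitly before it can be started.

```java
import java.util.Vector;

// Producer as a Runnable rather than a Thread subclass.
class QueueProducer implements Runnable {
    static final int MAXQUEUE = 5;
    private Vector messages = new Vector();

    public void run() {
        try {
            while (true) {
                putMessage();
                Thread.sleep(1000);   // no inherited sleep(); call it statically
            }
        } catch (InterruptedException e) { }
    }

    private synchronized void putMessage() throws InterruptedException {
        while (messages.size() == MAXQUEUE)
            wait();
        messages.addElement(new java.util.Date().toString());
        notify();
    }

    public synchronized String getMessage() throws InterruptedException {
        notify();
        while (messages.size() == 0)
            wait();
        String message = (String) messages.firstElement();
        messages.removeElement(message);
        return message;
    }
}

class RunnableDemo {
    public static void main(String[] args) throws InterruptedException {
        QueueProducer p = new QueueProducer();
        Thread t = new Thread(p);   // a Runnable must be wrapped in a Thread
        t.setDaemon(true);          // so this demo exits after one message
        t.start();
        System.out.println("Got message: " + p.getMessage());
    }
}
```

Consumer could be reworked the same way; Thread's constructor accepts any Runnable.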

The Producer and Consumer classes pass messages through an instance of a java.util.Vector object. We haven't discussed the Vector class yet, but you can think of this one as a queue where we add and remove elements in first-in, first-out order. See Chapter 7 for more information about the Vector class.

The important activity is in the synchronized methods: putMessage() and getMessage(). Although one of the methods is used by the Producer thread and the other by the Consumer thread, they both live in the Producer class because they have to be synchronized on the same object to work together. Here they both implicitly use the Producer object's lock. If the queue is empty, the Consumer blocks in a call into the Producer, waiting for another message.

Another design option would implement the getMessage() method in the Consumer class and use a synchronized code block to explicitly synchronize on the Producer object. In either case, synchronizing on the Producer is important because it allows us to have multiple Consumer objects that feed on the same Producer.
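Here is a sketch of that block-synchronized design, using stand-in class and field names of our own (the chapter doesn't show this code). The point is that synchronized (source) takes the same lock that a synchronized method inside the source object would, so the two sides still cooperate.

```java
import java.util.Vector;

// Minimal shared-state holder standing in for the Producer.
class MessageSource {
    Vector messages = new Vector();   // package-visible for the sketch

    synchronized void put(String m) {
        messages.addElement(m);
        notifyAll();                  // wake any reader blocked in wait() below
    }
}

class Reader {
    MessageSource source;

    Reader(MessageSource s) { source = s; }

    // getMessage() lives in the consumer, but synchronizes explicitly on the
    // producer object: the same lock a synchronized method of MessageSource
    // would use, so put() and getMessage() still cooperate.
    String getMessage() throws InterruptedException {
        synchronized (source) {
            while (source.messages.size() == 0)
                source.wait();        // releases source's lock while waiting
            String m = (String) source.messages.firstElement();
            source.messages.removeElement(m);
            return m;
        }
    }
}
```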

putMessage()'s job is to add a new message to the queue. It can't do this if the queue is already full, so it first checks the number of elements in messages. If there is room, it stuffs in another time stamp. If the queue is at its limit, however, putMessage() has to wait until there's space. In this situation, putMessage() executes a wait() and relies on the consumer to call notify() to wake it up after a message has been read. Here we have putMessage() testing the condition in a loop. In this simple example, the test probably isn't necessary; we could assume that when putMessage() wakes up, there is a free spot. However, this test is another example of good programming practice. Before it finishes, putMessage() calls notify() itself to prod any Consumer that might be waiting on an empty queue.

getMessage() retrieves a message for the Consumer. It enters a loop like the Producer's, waiting for the queue to have at least one element before proceeding. If the queue is empty, it executes a wait() and expects the producer to call notify() when more items are available. Notice that getMessage() makes its own unconditional call to notify(). This is a somewhat lazy way of keeping the Producer on its toes, so that the queue should generally be full. Alternatively, getMessage() might test to see if the queue had fallen below a low water mark before waking up the producer.

Now let's add another Consumer to the scenario, just to make things really interesting. Most of the necessary changes are in the Consumer class; the example below shows the code for the modified class.

class Consumer extends Thread {
    Producer producer;
    String name;

    Consumer(String name, Producer producer) {
        this.producer = producer;
        this.name = name;
    }

    public void run() {
        try {
            while ( true ) {
                String message = producer.getMessage();
                System.out.println(name + " got message: " + message);
                sleep( 2000 );
            }
        }
        catch( InterruptedException e ) { }
    }

    public static void main(String args[]) {
        Producer producer = new Producer();
        producer.start();

        // Start two this time
        new Consumer( "One", producer ).start();
        new Consumer( "Two", producer ).start();
    }
}

The Consumer constructor now takes a string name, to identify each consumer. The run() method uses this name in the call to println() to identify which consumer received the message.

The only modification to make in the Producer code is to change the call to notify() in putMessage() to a call to notifyAll(). Now, instead of the consumer and producer playing tag with the queue, we can have many players waiting on the condition of the queue to change. We might have a number of consumers waiting for a message, or we might have the producer waiting for a consumer to take a message. Whenever the condition of the queue changes, we prod all of the waiting methods to reevaluate the situation by calling notifyAll(). Note, however, that we don't need to change the call to notify() in getMessage(). If a Consumer thread is waiting for a message to appear in the queue, it's not possible for the Producer to be simultaneously waiting because the queue is full.

Here is some sample output when there are two consumers running, as in the main() method shown above:

One got message: Wed Mar 20 20:00:01 CST 1996
Two got message: Wed Mar 20 20:00:02 CST 1996
One got message: Wed Mar 20 20:00:03 CST 1996
Two got message: Wed Mar 20 20:00:04 CST 1996
One got message: Wed Mar 20 20:00:05 CST 1996
Two got message: Wed Mar 20 20:00:06 CST 1996
One got message: Wed Mar 20 20:00:07 CST 1996
Two got message: Wed Mar 20 20:00:08 CST 1996
...

We see nice, orderly alternation between the two consumers, as a result of the calls to sleep() in the various methods. Interesting things would happen, however, if we were to remove all of the calls to sleep() and let things run at full speed. The threads would compete and their behavior would depend on whether or not the system is using time slicing. On a time-sliced system, there should be a fairly random distribution between the two consumers, while on a non-time-sliced system, a single consumer could monopolize the messages. And since you're probably wondering about time slicing, let's talk about thread priority and scheduling.
Scheduling and Priority

Java makes certain guarantees as to how its threads are scheduled. Every thread has a priority value. If, at any time, a thread of a higher priority than the current thread becomes runnable, it preempts the lower priority thread and begins executing. By default, threads at the same priority are scheduled round robin, which means once a thread starts to run, it continues until it does one of the following:

Sleeps
    Calls Thread.sleep() or wait()

Waits for a lock
    Waits for a lock in order to run a synchronized method

Blocks on I/O
    Blocks, for example, in a read() or an accept() call

Explicitly yields control
    Calls yield()

Terminates
    Completes its target method or is terminated by a stop() call

This situation looks something like what's shown in Figure 6-4.
Figure 6-4: Priority preemptive, round robin scheduling

[Graphic: Figure 6-4]

Java leaves certain aspects of scheduling up to the implementation.[2] The main point here is that some, but not all, implementations of Java use time slicing on threads of the same priority.[3] In a time-sliced system, thread processing is chopped up, so that each thread runs for a short period of time before the context is switched to the next thread, as shown in Figure 6-5.

[2] This implementation-dependent aspect of Java isn't a big deal, since it doesn't hurt for an implementation to add time slicing on top of the default round-robin scheduling. It's actually not hard to create a time-slicing effect by simply having a high-priority thread sleep for a specified time interval. Every time it wakes up, it interrupts a lower-priority thread and causes processing to shift round robin to the next thread.

[3] As of Java Release 1.0, Sun's Java interpreter for the Windows 95 and Windows NT platforms uses time slicing, as does the Netscape Navigator Java environment. Sun's Java 1.0 for the Solaris UNIX platforms doesn't.
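The footnote's trick of faking time slices can be sketched like this (the class names are ours): a top-priority daemon thread that does nothing but sleep. Each time it wakes it preempts the running lower-priority thread; when it goes back to sleep, round-robin scheduling moves on to the next runnable thread at the lower priority.

```java
// A high-priority "slicer" thread that forces periodic context switches
// among lower-priority threads.
class Slicer extends Thread {
    int interval;

    Slicer(int millis) {
        interval = millis;
        setPriority(Thread.MAX_PRIORITY);
        setDaemon(true);              // don't keep the VM alive just for this
    }

    public void run() {
        try {
            while (true)
                sleep(interval);      // wake briefly, then give up the CPU again
        } catch (InterruptedException e) { }
    }
}

class SlicerDemo {
    public static void main(String[] args) {
        new Slicer(100).start();      // start before the worker threads
        // ... start same-priority worker threads here ...
    }
}
```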

Higher priority threads still preempt lower priority threads in this scheme. The addition of time slicing mixes up the processing among threads of the same priority; on a multiprocessor machine, threads may even be run simultaneously. Unfortunately, this feature can lead to differences in your application's behavior.
Figure 6-5: Priority preemptive, time-sliced scheduling

[Graphic: Figure 6-5]

Since Java doesn't guarantee time slicing, you shouldn't write code that relies on this type of scheduling; any software you write needs to function under the default round-robin scheduling. But if you're wondering what your particular flavor of Java does, try the following experiment:

class Thready {
    public static void main( String args [] ) {
        new MyThread("Foo").start();
        new MyThread("Bar").start();
    }
}

class MyThread extends Thread {
    String message;

    MyThread ( String message ) {
        this.message = message;
    }

    public void run() {
        while ( true )
            System.out.println( message );
    }
}

The Thready class starts up two MyThread objects. Each MyThread goes into a hard loop (very bad form) and prints its message. Since we don't specify a priority for either thread, they both inherit the priority of their creator, so they have the same priority. When you run this example, you will see how your Java implementation does its scheduling. Under a pure round-robin scheme, only "Foo" should be printed; "Bar" never appears. In a time-slicing implementation, you should occasionally see the "Foo" and "Bar" messages alternate.
Priorities

Now let's change the priority of the second thread:

class Thready {
    public static void main( String args [] ) {
        new MyThread("Foo").start();
        Thread bar = new MyThread("Bar");
        bar.setPriority( Thread.NORM_PRIORITY + 1 );
        bar.start();
    }
}

As you might expect, this changes how our example behaves. Now you may see a few "Foo" messages, but "Bar" should quickly take over and not relinquish control, regardless of the scheduling policy.

Here we have used the setPriority() method of the Thread class to adjust our thread's priority. The Thread class defines three standard priority values, as shown in Table 6-1.

Table 6-1: Thread Priority Values

Value            Definition

MIN_PRIORITY     Minimum priority
NORM_PRIORITY    Normal priority
MAX_PRIORITY     Maximum priority

If you need to change the priority of a thread, you should use one of these values or a value close to one of them. But let me warn you against using MAX_PRIORITY or anything near it: if you elevate many threads to that level, priority will quickly become meaningless. A slight increase in priority should be enough for most needs; specifying NORM_PRIORITY + 1 in our example is enough to beat out the other thread.
Yielding

As I said earlier, whenever a thread sleeps, waits, or blocks on I/O, it gives up its time slot, and another thread is scheduled. So as long as you don't write methods that use hard loops, all threads should get their due. However, a Thread can also give up its time voluntarily with the yield() call. We can change our previous example to include a yield() on each iteration:

class MyThread extends Thread {
    ...

    public void run() {
        while ( true ) {
            System.out.println( message );
            yield();
        }
    }
}

Now you should see "Foo" and "Bar" messages alternating one for one. If you have threads that perform very intensive calculations, or otherwise eat a lot of CPU time, you might want to find an appropriate place for them to yield control occasionally. Alternatively, you might want to drop the priority of your intensive thread, so that more important processing can proceed around it.
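The priority-dropping alternative can be sketched like this (the empty Runnable body stands in for the real CPU-intensive work): start the cruncher just above minimum priority, so that any normal-priority thread preempts it the moment it becomes runnable.

```java
// Run CPU-hungry work at low priority so interactive threads win.
class LowPriorityDemo {
    public static void main(String[] args) {
        Thread cruncher = new Thread(new Runnable() {
            public void run() {
                // ... long-running, CPU-intensive work here ...
            }
        });
        cruncher.setPriority(Thread.MIN_PRIORITY + 1);
        cruncher.start();
    }
}
```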

Dec 18, 2009

SwingWorker details – canceling background tasks in flight

http://developerlife.com/tutorials/?p=38


Introduction

If you use the SwingWorker class to run background tasks that don't freeze up the EDT (Event Dispatch Thread) in your Swing apps, this may be of interest. What happens when you cancel a long-running operation that's running in the background? For example, a user might generate an event that causes a SwingWorker to be created, which starts running some code in the background. What happens when the user wants to cancel this long-running background operation? That is the question we will delve into and answer in this tutorial.

You can learn more about threads in this chapter of the Java Concurrency in Practice book (on Safari Books Online). You can learn more about the Event Dispatch Thread (EDT) in this chapter of Filthy Rich Clients (on Safari Books Online).
The setup

Let’s say you have some code that you want to run in the background (on a thread that is not the EDT). So this is what you would do. You would subclass SwingWorker and put your code in the doInBackground() method. Here’s an example:

SwingWorker<String, Void> myWorker = new SwingWorker<String, Void>() {

    protected String doInBackground() throws Exception {
        while (!isCancelled()) {
            // run some code in the background...
        }
        return "something";
    }

    @Override protected void done() {
        try {
            String value = get();
        }
        catch (InterruptedException e) {}
        catch (ExecutionException e) {}
        catch (CancellationException e) {}
    }
};

The first type parameter for our SwingWorker subclass is String; this is the type of object returned by doInBackground() and retrieved with get() in done(), which is called when the background thread completes its execution. The second parameter is just Void, since I'm not going to use this SwingWorker to post intermediate results to the EDT for processing (while the background thread is running). Sun's Java Tutorial has more information on SwingWorker if you need more background. Note the use of isCancelled() in the while loop; we will cover this in more detail in the sections that follow. The results of this background processing are retrieved on the EDT in the done() method. Also note the exceptions; we will cover these in the next sections as well.

To run this snippet, all you have to do is:

myWorker.execute();

To cancel it, all you have to do is:

myWorker.cancel(true);

The boolean passed to cancel() says whether the worker may interrupt the thread running doInBackground(); pass true, as the discussion below assumes.

The tale of two threads

When you call execute() on myWorker (from the EDT, or from whatever thread the execute() call runs in), two things happen:

1. a thread is created that runs the SwingWorker (let’s call this Thread1).
2. another thread is created by this SwingWorker instance which runs your code in the background (let’s call this Thread2).

Let’s say that you want to cancel the background task because it is taking too long. You would then call cancel() on myWorker. When you call cancel() on the SwingWorker instance myWorker, the following things happen concurrently:

1. CancellationException gets raised on Thread1 (the SwingWorker thread). The SwingWorker thread stops waiting for the doInBackground() method to end and goes straight into done(). There, the call to get() throws a CancellationException on the SwingWorker thread itself, which you can catch in the CancellationException handler. The SwingWorker thread ends its lifecycle at this point.
2. InterruptedException gets raised on Thread2 (the thread that's actually running your code in the background). If your code is not interruptible, or if you catch the InterruptedException and just keep going, this thread will not die and will continue its background processing! This is why it's necessary to check isCancelled(): it is the only way, outside of responding to an InterruptedException, to make the background thread stop running your code. If your code calls sleep() or wait(), those methods will respond to the InterruptedException being raised; otherwise, the only way to tell is by checking isCancelled(). Also, when your background task completes execution and returns its String, nothing will happen, since the SwingWorker (Thread1) that was supposed to respond to this (in its done() method) is already dead. So it's pretty easy, if you're not careful, for the underlying thread executing your background code and the SwingWorker thread itself to get out of "sync". Finally, if you have code doing network I/O, you have to use an InputStream or OutputStream whose read/write loop can check isCancelled() to break off the I/O operation. If you can't do this, you can try closing the underlying I/O streams when isCancelled() is detected, causing an IOException.
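Putting the two rules together, a cancellation-friendly worker might look like this sketch (the class name and the simulated work are ours, not from any particular API). The loop re-checks isCancelled() on every pass, and the sleep() gives an interrupt from cancel(true) a place to land.

```java
import javax.swing.SwingWorker;
import java.util.concurrent.CancellationException;
import java.util.concurrent.ExecutionException;

// A worker whose background loop cooperates with cancellation.
class CancellableWorker extends SwingWorker<String, Void> {
    protected String doInBackground() throws Exception {
        while (!isCancelled()) {
            Thread.sleep(50);     // interruptible stand-in for one unit of work
        }
        return "stopped";         // reached only via the isCancelled() check
    }

    @Override protected void done() {
        try {
            String value = get();
            // ... apply the result to the UI here ...
        }
        catch (InterruptedException e) { }
        catch (ExecutionException e) { }
        catch (CancellationException e) {
            // the normal landing spot after cancel(true)
        }
    }
}
```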

Closing thoughts

In Swing apps that use SwingWorker to perform lengthy background tasks, keep in mind that calling cancel() on the SwingWorker does not by itself mean the task has been "cancelled": thread cancellation in Java is cooperative, not preemptive. The onus is on you to check the isCancelled() method in your doInBackground() code, and to do the proper exception handling in the done() method of the SwingWorker, so that you don't have a thread leak. It's also important to process the results of your background operation in the done() method; this keeps the two threads from going out of "sync". Since the SwingWorker thread can be cancelled without the underlying execution thread knowing, perform any changes to your system in done(): if the SwingWorker gets cancelled, those changes then never show up in your system.

Dec 14, 2009

P,NP,NP-complete,NP-hard

[Repost] P, NP, NP-complete, NP-hard

An NP problem is one whose solutions can be checked for correctness in polynomial time. Take summing an array: given a proposed answer, it clearly takes only polynomial time to verify whether it is right. The same goes for SAT: given an assignment, its correctness can be verified in polynomial time. So SAT, array summation, and the like are all NP problems. Some NP problems can already be solved in polynomial time, array summation among them; this easier portion of NP is called P. The NP problems outside P are the ones we cannot yet solve in polynomial time. Might some genius one day invent a brilliant algorithm that solves them all in polynomial time? That is, could every NP problem really be a P problem that humanity simply hasn't discovered yet? Does NP = P?

Proving NP = P directly is clearly an arduous road: there are far too many NP problems to find a polynomial algorithm for each one. At this point Stephen A. Cook appeared and, in his paper "The Complexity of Theorem Proving Procedures," proposed the concept of NP-completeness. The NP-complete (NPC) problems are the hardest part of NP: every NP problem can be reduced to an NPC problem in polynomial time. Reduction means that if A reduces to B and B is easy to solve, then A is easy to solve. Clearly, if any single NPC problem were solved in polynomial time, all NP problems would become P problems and NP = P would be proved, which greatly simplifies the proof. How, then, do you show that a problem C is NP-complete? First, show that C is an NP problem, i.e., that the correctness of its solutions is easy to verify; then show that some known NP-complete problem B reduces to C in polynomial time. This requires at least one NPC problem to exist in the first place. In 1971 Cook proved that the ancestor of all NPC problems is SAT: given a logical formula over n Boolean variables, does some combination of values satisfy it? Cook showed that SAT is an NPC problem: if SAT is easy to solve, all of NP is easy to solve. How did he do it?

By way of the nondeterministic Turing machine, a special kind of Turing machine that is very good at guessing: if a problem has a solution, the machine can guess it in polynomial time. Cook proved that SAT captures all the constraints such a machine must satisfy during a computation, so the computation of any NP problem on this machine can be described as a SAT instance. Therefore, a good algorithm for SAT would solve the nondeterministic Turing machine's computations; and since every NP problem is solved in polynomial time on such a machine, solving SAT would solve all of NP. Hence SAT is an NP-complete problem. Thanks to Cook, we have one NPC problem, and the rest is straightforward: prove it by reduction. Thousands of NPC problems have since been found; solve any one of them and NP = P is proved, which is worth a millennium prize (and, I'd say, an immediate Turing Award as well).

Someone is bound to ask: beyond NP there are problems whose solutions cannot even be verified in polynomial time. Those problems might not be solvable in polynomial time even if NP = P; they are called NP-hard. NP-hard is very hard indeed: finding the perfect girlfriend is an NP-hard problem. Any NP-complete problem can be reduced to an NP-hard problem, so if one NP-hard problem were solved, all of NP would be solved too.

The difference between NP-hard and NP-complete

An intuitive understanding of NP-hard and NP-complete problems is that they are the ones for which it is very hard (quite possibly impossible) to find a polynomial-time algorithm. So beginners in algorithms always ask: what is the difference between NP-hard and NP-complete? The short answer is by definition: if every NP problem can be polynomially reduced to problem A, then A is NP-hard; if A is both NP-hard and in NP, then it is NP-complete. From the definitions it is easy to see that the NP-hard class contains the NP-complete class. The next question is whether there is a problem that is NP-hard but not NP-complete. The answer is yes: the halting problem, i.e., given a program and an input, decide whether its execution terminates. The halting problem is undecidable, so it is certainly not an NP problem. Yet an NP-complete problem such as SAT can be polynomially reduced to it: construct a program A that, given a formula as input, enumerates every assignment to its variables, halts if some assignment makes the formula true, and otherwise enters an infinite loop. Deciding whether the formula is satisfiable then becomes deciding whether program A halts on that formula. So the halting problem is NP-hard but not NP-complete.

Let me risk getting flamed and try to explain P / NP / NP-complete / NP-hard myself.

  1. Computational complexity
  This is a measure of how much "time" an algorithm needs. (There is also space complexity, but since the two can be traded against each other, we usually just speak of time complexity.) For an input of size n, we express it as a simplified expression in n, where "simplified" means dropping coefficients and constants and keeping only the dominant term.
  For example, to find the largest of n numbers, compare the first with the second, the larger of those with the third, and so on: n-1 comparisons in all, written O(n) (for very large n, the -1 is negligible).
  Or take n numbers sorted in ascending order, and find the one equal to x. One method is to scan in order from beginning to end; in the best case the first element is x, in the worst case you compare all n elements, so the worst-case complexity is O(n). Another method: compare x with the middle element; if x is smaller, search the first half, if larger, the second half, each time probing the middle of the remaining range. The worst case takes lg(n)/lg 2 comparisons; dropping the constant factor lg 2, the complexity is O(lg n).
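The two searches just described can be written out like this (the class and method names are ours): a linear scan with worst case O(n), and binary search on a sorted array with worst case O(lg n).

```java
// Linear scan vs. binary search; both return the index of x, or -1.
class Searches {
    static int linear(int[] a, int x) {
        for (int i = 0; i < a.length; i++)
            if (a[i] == x) return i;       // up to n comparisons
        return -1;
    }

    static int binary(int[] a, int x) {    // a must be sorted ascending
        int lo = 0, hi = a.length - 1;
        while (lo <= hi) {
            int mid = (lo + hi) / 2;       // probe the middle element
            if (a[mid] == x) return mid;
            else if (a[mid] < x) lo = mid + 1;  // x is in the upper half
            else hi = mid - 1;                  // x is in the lower half
        }
        return -1;                         // about lg(n) probes in the worst case
    }
}
```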
  
  2. Ordering complexities
  By how fast the expression grows as n increases, we can order them: 1 < lg(n) < n < n*lg(n) < n^2 < ... < n^k (k a constant) < ... < 2^n. That last 2^n is exponential growth; anyone who has read the story of doubling grains of wheat on a chessboard knows how fast that gets. Everything before it is polynomial in n. Why do we drop all coefficients and constants here, e.g. simplify 2*n^3 + 9*n^2 to n^3? Because asymptotically they make no difference: the ratio (2*n^3 + 9*n^2)/(n^3) tends to the constant 2 as n grows, so the two expressions differ only by a constant factor.
  
  3. P problems: if a deterministic algorithm whose complexity can be expressed as a polynomial exists for a problem, the problem belongs to P (polynomial).
  
  4. NP problems:
  The N in NP stands for a nondeterministic (non-deterministic) algorithm, which works like this: (1) guess an answer; (2) verify whether that answer is correct; (3) if some verification succeeds, the algorithm has solved the problem. An NP (non-deterministic polynomial) problem is one for which such a nondeterministic algorithm exists whose verification step (2) has polynomial time complexity.
  
  5. Reduction:
  How should I explain this? Sets? Too hard to state precisely. If you remember function mappings it is easier to picture.
  Roughly: find a correspondence from every input of problem 1 to an input of problem 2 such that the outputs of problem 2 correspond back to the outputs of problem 1; then, once we have a solution to problem 2, the input/output correspondences give us a solution to problem 1. We then say problem 1 reduces to problem 2.
  
  6. NP-hard:
  A problem to which every NP problem can be reduced is called NP-hard.
  
  7. NP-complete:
  A problem that is both an NP problem and NP-hard is NP-complete. Satisfiability is one NP-complete problem; the famous graph coloring, Hamiltonian cycle, knapsack, and traveling salesman problems are also NP-complete.
  
  Intuitively, P <= NP <= NP-complete <= NP-hard in increasing difficulty. But so far we can only prove that P is contained in NP; whether P = NP, or P is a proper subset of NP, is still unknown.
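To make "verifiable in polynomial time" concrete, here is a small sketch of a SAT assignment checker; the encoding is our own (literal i means variable i, -i means its negation, and assignment[i] is variable i's truth value). Checking a proposed assignment is one linear pass over the clauses, even though finding one may take exponential search.

```java
import java.util.List;

// Polynomial-time verifier for a proposed SAT assignment.
class SatVerify {
    static boolean check(List<int[]> clauses, boolean[] assignment) {
        for (int[] clause : clauses) {
            boolean satisfied = false;
            for (int lit : clause) {
                int v = Math.abs(lit);
                if (assignment[v] == (lit > 0)) { satisfied = true; break; }
            }
            if (!satisfied) return false;  // one unsatisfied clause sinks it
        }
        return true;                       // O(total literals): polynomial
    }
}
```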

Aug 28, 2009

Hide Function Pointer Declarations With a typedef

Can you tell what the following declaration means?


void (*p[10]) (void (*)() );

Only a few programmers can tell that p is an "array of 10 pointers to a function returning void and taking a pointer to another function that returns void and takes no arguments." The cumbersome syntax is nearly indecipherable. However, you can simplify it considerably by using typedef declarations. First, declare a typedef for "pointer to a function returning void and taking no arguments" as follows:


typedef void (*pfv)();

Next, declare another typedef for "pointer to a function returning void and taking a pfv" based on the typedef we previously declared:


typedef void (*pf_taking_pfv) (pfv);

Now that we have created the pf_taking_pfv typedef as a synonym for the unwieldy "pointer to a function returning void and taking a pfv", declaring an array of 10 such pointers is a breeze:


pf_taking_pfv p[10];

Aug 13, 2009

eclipse + visual studio

http://neora.javaeye.com/blog/192626
http://groups.google.com/group/bmw-network-security-visualization/browse_thread/thread/6a7367cb596eb30d/10a9038a8d94face?lnk=gst&q=enviroment+variables#10a9038a8d94face
http://www3.ntu.edu.sg/home/ehchua/programming/howto/Eclipse_cpp_HowTo.html

Jul 20, 2009

GSM and CDMA

Before 2002, GSM will dominate the mobile communications market, CDMA will become the second-largest system after GSM, and TDMA's market share is expected to be only 11%.

1. Three multiple-access schemes
In modern cellular mobile communication systems, a key problem has always been how to let as many users as possible communicate at the same time within limited spectrum. Mobile systems are generally classified by their multiple-access scheme:
(1) FDMA (Frequency Division Multiple Access). FDMA divides the spectrum into frequency bands so that the system can serve several users at once; traditional analog radio equipment uses this technique. Each user is assigned a fixed frequency for a period of time, and other users can use that band only after the first user is finished with it.
(2) TDMA (Time Division Multiple Access). TDMA is a multiple-access technique used in digital wireless systems; it slices each channel into time slots and so offers higher user capacity. The everyday GSM handset uses it (strictly speaking, GSM combines FDMA with TDMA). As with FDMA, while a TDMA channel is occupied by a call, no other user can use that resource.
(3) CDMA (Code Division Multiple Access). CDMA rests on a fundamentally different principle from the previous two: spread-spectrum communication. Each user is assigned a unique code, called a pseudorandom code, which controls the spreading modulation; users can then talk on the same frequency band at the same time without interfering with one another. Its user capacity is 10 to 20 times that of an FDMA system and 4 to 7 times that of a TDMA system.
CDMA is therefore an effective answer to the conflict between an ever-growing user base and increasingly scarce spectrum. The characteristics of spread spectrum also make mobile communication far more secure and reliable, and because CDMA distinguishes users' channels by pseudorandom code rather than by reserving resources (FDMA and TDMA must, for example, leave guard spacing between different users' frequency bands), it uses resources far more efficiently. CDMA also delivers better call quality and noise immunity, sharply reduces the dropped calls users find so annoying, and noticeably extends handset battery life.
In short, from a technical standpoint CDMA has development potential that the other technologies cannot match.

2. Factors limiting its growth
For all its technical advantages, a technical edge is not the same as a market edge: a true assessment of a mobile system must also weigh factors such as handset cost, base-station cost, support for new services, and the usability of existing systems. At present GSM technology has reached maturity; the GSM standard has been adopted by many countries and regions, many large carriers have built GSM networks around the world, and the technology keeps growing along with advances in electronics, so calling it "global" is no exaggeration.
CDMA, by contrast, though new, faces large up-front network costs and has had less time to mature; so far its worldwide user base is small, its coverage limited, and its use not yet widespread. GSM will therefore keep its dominant market position for some time yet, with CDMA settling for second place.

3. Market trends
According to a new research report, GSM, currently the world's most widely used digital cellular system, will dominate the market at least until 2002. By 1997 GSM had reached 64 million users, more than twice the users of all comparable systems combined, including the CDMA system widely used in the United States.
The Yankee Group's "Global Trends of Cellular Markets" report estimates that by 2002 GSM will have 248 million users, while the users of the other two major systems together will reach 151 million.
In 1997, analog systems still dominated the wireless market: the report puts analog at about 47% of all users. But GSM planned to grow its share over the following four years (one obvious target being those analog users), aiming to hold 47% of the mobile market in 2002, by which time analog's share would fall to 16.4%.
CDMA, though it has only 6 million users worldwide today, is attracting growing attention; the report forecasts it will grow to 92 million users by 2002, making it the second-largest system after GSM.
Whether CDMA succeeds will depend closely on how it fares in the Asian market, especially China. As the report puts it, "China will determine whether CDMA can become a worldwide wireless standard on the scale of GSM." One analyst remarked that the worldwide popularity of Europe's GSM amounts to a European victory in the battle for the wireless market. Yet despite GSM's weight in the market, the many disagreements that remain among vendors make it unlikely that digital wireless will converge on a single worldwide standard.
Besides CDMA, 5% of the world's users are on TDMA systems; the report predicts TDMA will hold an 11% share (about 59 million users) in 2002.
The total number of mobile subscribers jumped from 85 million in 1995 to 199 million in 1997, and the market's future potential is enormous. Yankee Group predicts 529 million subscribers by 2002; against a 1997 market of roughly US$87 billion, the worldwide mobile industry will be worth as much as US$310 billion in 2002.

4. Adoption in China
China currently has 10 million mobile subscribers, the third-largest user base in the world, and as the domestic telecom industry develops that number will keep growing. Mobile technology is advancing by the day, with new techniques constantly emerging; alongside the dominant GSM handsets, new handsets using CDMA technology have appeared on the domestic market.
Since China's CDMA network is still being trialed in only four cities (Beijing, Shanghai, Guangzhou, and Shenzhen), while GSM is already live in dozens of cities and the networks already built will not be made obsolete by technical progress for a long time, buying a GSM handset is the sensible choice today, provided call quality and service can be assured. Once nationwide, and eventually worldwide, CDMA networks are built, CDMA will develop rapidly in China.

Jun 9, 2009

Jun 2, 2009

ofdma ofdm

link
link2

Strictly speaking, OFDM is a modulation scheme, like QPSK or 16QAM: it modulates information bits into chips for transmission.
OFDMA, by contrast, is a multiple-access scheme, like FDMA: it uses different frequencies to separate the users within a cell.
The simplest possible example (ignoring TDMA):
a cell has 100 usable subcarriers and 10 users.
Many allocations are possible; here are the two simplest:
(1) Give subcarriers 1-10 to the first user, subcarriers 11-20 to the second user, and so on,
with each user applying 10-carrier OFDM modulation to its own set.
(2) Give subcarriers 1, 11, 21, ..., 91 to the first user, subcarriers 2, 12, 22, ..., 92 to the second user, and so on,
again with each user applying 10-carrier OFDM modulation.
Of course, different users can also be allocated different numbers of subcarriers according to their needs.
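The two allocation schemes above can be written out as a short sketch (the names and the 1-based subcarrier numbering are just for illustration):

```python
# Two simple OFDMA subcarrier allocations: 100 subcarriers, 10 users.
NUM_SUBCARRIERS = 100
NUM_USERS = 10
PER_USER = NUM_SUBCARRIERS // NUM_USERS  # 10 subcarriers per user

# Scheme 1: contiguous blocks -- user 0 gets 1..10, user 1 gets 11..20, ...
block = {u: list(range(u * PER_USER + 1, (u + 1) * PER_USER + 1))
         for u in range(NUM_USERS)}

# Scheme 2: interleaved -- user 0 gets 1, 11, ..., 91; user 1 gets 2, 12, ..., 92; ...
interleaved = {u: list(range(u + 1, NUM_SUBCARRIERS + 1, NUM_USERS))
               for u in range(NUM_USERS)}

assert block[0] == [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
assert interleaved[0] == [1, 11, 21, 31, 41, 51, 61, 71, 81, 91]
# In both schemes every subcarrier is assigned to exactly one user
for scheme in (block, interleaved):
    assert sorted(c for subs in scheme.values() for c in subs) == \
        list(range(1, NUM_SUBCARRIERS + 1))
```

The interleaved mapping of scheme (2) spreads each user's subcarriers across the whole band, which in practice gives better frequency diversity against frequency-selective fading than the contiguous blocks of scheme (1).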

Apr 29, 2009

a very good blog about networking conferences and papers

http://morningtheworld.spaces.live.com/default.aspx?wa=wsignin1.0&sa=741652380

Apr 23, 2009

a problem in SmartDraw 2007

I can save the picture as an EPS file, but when I include it in a PDF file, the picture area comes out blank. I googled for a solution and copied it below:

"I ran into a similar problem before. My fix was to select only 'include AI format',
set AI Version to 7, and leave 'Include TIFF preview' unchecked; the figure then displays under YAP and also converts to PDF."

Apr 11, 2009

Computer Science Conference Rankings


DISCLAIMER:

  • The rankings of conferences are taken mostly from an informal external source. The detailed procedure behind the ranking is unknown to the author. These rankings do not necessarily represent my personal view either.
  • There is a possibility that some of the rankings may not be accurate, may not reflect current status of the conferences accurately, may not be complete, and there is no copyright. If you are looking for a scientifically accurate ranking please disregard this page. The author does not bear any responsibility if the ranking is inaccurate.
  • If you think this ranking offends you for some reason (e.g., you are not satisfied with the ranking of your conference), please just ignore the list. This is not an OFFICIAL list. This is ONLY FOR REFERENCE.
  • Some conferences accept multiple categories of papers. The rankings below are for the most prestigious category of paper at a given conference. All other categories should be treated as "unranked".

AREA: Databases


Rank 1:


SIGMOD: ACM SIGMOD Conf on Management of Data
PODS: ACM SIGMOD Conf on Principles of DB Systems
VLDB: Very Large Data Bases
ICDE: Intl Conf on Data Engineering

CIKM: Intl. Conf on Information and Knowledge Management
ICDT: Intl Conf on Database Theory


Rank 2:

SSD: Intl Symp on Large Spatial Databases
DEXA: Database and Expert System Applications
FODO: Intl Conf on Foundation on Data Organization
EDBT: Extending DB Technology
DOOD: Deductive and Object-Oriented Databases
DASFAA: Database Systems for Advanced Applications
SSDBM: Intl Conf on Scientific and Statistical DB Mgmt
CoopIS - Conference on Cooperative Information Systems
ER - Intl Conf on Conceptual Modeling (ER)

Rank 3:


COMAD: Intl Conf on Management of Data
BNCOD: British National Conference on Databases
ADC: Australasian Database Conference
ADBIS: Symposium on Advances in DB and Information Systems
DaWaK - Data Warehousing and Knowledge Discovery
RIDE Workshop
IFIP-DS: IFIP-DS Conference
IFIP-DBSEC - IFIP Workshop on Database Security
NGDB: Intl Symp on Next Generation DB Systems and Apps
ADTI: Intl Symp on Advanced DB Technologies and Integration
FEWFDB: Far East Workshop on Future DB Systems
MDM - Int. Conf. on Mobile Data Access/Management (MDA/MDM)
VDB - Visual Database Systems
IDEAS - International Database Engineering and Application Symposium

Others:


ARTDB - Active and Real-Time Database Systems
CODAS: Intl Symp on Cooperative DB Systems for Adv Apps
DBPL - Workshop on Database Programming Languages
EFIS/EFDBS - Engineering Federated Information (Database) Systems
KRDB - Knowledge Representation Meets Databases
NDB - National Database Conference (China)
NLDB - Applications of Natural Language to Data Bases
FQAS - Flexible Query-Answering Systems
IDC(W) - International Database Conference (HK CS)
RTDB - Workshop on Real-Time Databases
SBBD: Brazilian Symposium on Databases
WebDB - International Workshop on the Web and Databases
WAIM: International Conference on Web Age Information Management
DASWIS - Data Semantics in Web Information Systems
DMDW - Design and Management of Data Warehouses
DOLAP - International Workshop on Data Warehousing and OLAP
DMKD - Workshop on Research Issues in Data Mining and Knowledge Discovery
KDEX - Knowledge and Data Engineering Exchange Workshop
NRDM - Workshop on Network-Related Data Management
MobiDE - Workshop on Data Engineering for Wireless and Mobile Access
MDDS - Mobility in Databases and Distributed Systems
MEWS - Mining for Enhanced Web Search
TAKMA - Theory and Applications of Knowledge MAnagement
WIDM: International Workshop on Web Information and Data Management
W2GIS - International Workshop on Web and Wireless Geographical Information Systems
CDB - Constraint Databases and Applications
DTVE - Workshop on Database Technology for Virtual Enterprises
IWDOM - International Workshop on Distributed Object Management
OODBS - Workshop on Object-Oriented Database Systems
PDIS: Parallel and Distributed Information Systems

AREA: Artificial Intelligence and Related Subjects


Rank 1:

AAAI: American Association for AI National Conference
CVPR: IEEE Conf on Comp Vision and Pattern Recognition
IJCAI: Intl Joint Conf on AI
ICCV: Intl Conf on Computer Vision
ICML: Intl Conf on Machine Learning
KDD: Knowledge Discovery and Data Mining
KR: Intl Conf on Principles of KR & Reasoning
NIPS: Neural Information Processing Systems
UAI: Conference on Uncertainty in AI
AAMAS: Intl Conf on Autonomous Agents and Multi-Agent Systems (past: ICAA)
ACL: Annual Meeting of the ACL (Association of Computational Linguistics)

Rank 2:


NAACL: North American Chapter of the ACL
AID: Intl Conf on AI in Design
AI-ED: World Conference on AI in Education
CAIP: Intl Conf on Comp. Analysis of Images and Patterns
CSSAC: Cognitive Science Society Annual Conference
ECCV: European Conference on Computer Vision
ECAI: European Conf on AI
EML: European Conf on Machine Learning
GECCO: Genetic and Evolutionary Computation Conference (used to be GP)
IAAI: Innovative Applications in AI
ICIP: Intl Conf on Image Processing
ICNN/IJCNN: Intl (Joint) Conference on Neural Networks
ICPR: Intl Conf on Pattern Recognition
ICDAR: International Conference on Document Analysis and Recognition
ICTAI: IEEE conference on Tools with AI
AMAI: Artificial Intelligence and Maths
DAS: International Workshop on Document Analysis Systems
WACV: IEEE Workshop on Apps of Computer Vision
COLING: International Conference on Computational Linguistics
EMNLP: Empirical Methods in Natural Language Processing
EACL: Annual Meeting of European Association Computational Linguistics
CoNLL: Conference on Natural Language Learning
DocEng: ACM Symposium on Document Engineering
IEEE/WIC International Joint Conf on Web Intelligence and Intelligent Agent Technology
ICDM - IEEE International Conference on Data Mining


Rank 3:


PRICAI: Pacific Rim Intl Conf on AI
AAI: Australian National Conf on AI
ACCV: Asian Conference on Computer Vision
AI*IA: Congress of the Italian Assoc for AI
ANNIE: Artificial Neural Networks in Engineering
ANZIIS: Australian/NZ Conf on Intelligent Inf. Systems
CAIA: Conf on AI for Applications
CAAI: Canadian Artificial Intelligence Conference
ASADM: Chicago ASA Data Mining Conf: A Hard Look at DM
EPIA: Portuguese Conference on Artificial Intelligence
FCKAML: French Conf on Know. Acquisition & Machine Learning
ICANN: International Conf on Artificial Neural Networks
ICCB: International Conference on Case-Based Reasoning
ICGA: International Conference on Genetic Algorithms
ICONIP: Intl Conf on Neural Information Processing
IEA/AIE: Intl Conf on Ind. & Eng. Apps of AI & Expert Sys
ICMS: International Conference on Multiagent Systems
ICPS: International conference on Planning Systems
IWANN: Intl Work-Conf on Art & Natural Neural Networks
PACES: Pacific Asian Conference on Expert Systems
SCAI: Scandinavian Conference on Artificial Intelligence
SPICIS: Singapore Intl Conf on Intelligent System
PAKDD: Pacific-Asia Conf on Know. Discovery & Data Mining
SMC: IEEE Intl Conf on Systems, Man and Cybernetics
PAKDDM: Practical App of Knowledge Discovery & Data Mining
WCNN: The World Congress on Neural Networks
WCES: World Congress on Expert Systems
ASC: Intl Conf on AI and Soft Computing
PACLIC: Pacific Asia Conference on Language, Information and Computation
ICCC: International Conference on Chinese Computing
ICADL: International Conference on Asian Digital Libraries
RANLP: Recent Advances in Natural Language Processing
NLPRS: Natural Language Pacific Rim Symposium
Meta-Heuristics International Conference


Rank 3:

ICRA: IEEE Intl Conf on Robotics and Automation
NNSP: Neural Networks for Signal Processing
ICASSP: IEEE Intl Conf on Acoustics, Speech and SP
GCCCE: Global Chinese Conference on Computers in Education
ICAI: Intl Conf on Artificial Intelligence
AEN: IASTED Intl Conf on AI, Exp Sys & Neural Networks
WMSCI: World Multiconfs on Sys, Cybernetics & Informatics
LREC: Language Resources and Evaluation Conference
AIMSA: Artificial Intelligence: Methodology, Systems, Applications
AISC: Artificial Intelligence and Symbolic Computation
CIA: Cooperative Information Agents
International Conference on Computational Intelligence for Modelling, Control and Automation
Pattern Matching
ECAL: European Conference on Artificial Life
EKAW: Knowledge Acquisition, Modeling and Management
EMMCVPR: Energy Minimization Methods in Computer Vision and Pattern Recognition
EuroGP: European Conference on Genetic Programming
FoIKS: Foundations of Information and Knowledge Systems
IAWTIC: International Conference on Intelligent Agents, Web Technologies and Internet Commerce
ICAIL: International Conference on Artificial Intelligence and Law
SMIS: International Syposium on Methodologies for Intelligent Systems
IS&N: Intelligence and Services in Networks
JELIA: Logics in Artificial Intelligence
KI: German Conference on Artificial Intelligence
KRDB: Knowledge Representation Meets Databases
MAAMAW: Modelling Autonomous Agents in a Multi-Agent World
NC: ICSC Symposium on Neural Computation
PKDD: Principles of Data Mining and Knowledge Discovery
SBIA: Brazilian Symposium on Artificial Intelligence
Scale-Space: Scale-Space Theories in Computer Vision
XPS: Knowledge-Based Systems
I2CS: Innovative Internet Computing Systems
TARK: Theoretical Aspects of Rationality and Knowledge Meeting
MKM: International Workshop on Mathematical Knowledge Management
ACIVS: International Conference on Advanced Concepts For Intelligent Vision Systems
ATAL: Agent Theories, Architectures, and Languages
LACL: International Conference on Logical Aspects of Computational Linguistics

AREA: Hardware and Architecture

Rank 1:


ASPLOS: Architectural Support for Prog Lang and OS
ISCA: ACM/IEEE Symp on Computer Architecture
ICCAD: Intl Conf on Computer-Aided Design
DAC: Design Automation Conf
MICRO: Intl Symp on Microarchitecture
HPCA: IEEE Symp on High-Perf Comp Architecture


Rank 2:

FCCM: IEEE Symposium on Field Programmable Custom Computing Machines
SUPER: ACM/IEEE Supercomputing Conference
ICS: Intl Conf on Supercomputing
ISSCC: IEEE Intl Solid-State Circuits Conf
HCS: Hot Chips Symp
VLSI: IEEE Symp VLSI Circuits
CODES+ISSS: Intl Conf on Hardware/Software Codesign & System Synthesis
DATE: IEEE/ACM Design, Automation & Test in Europe Conference
FPL: Field-Programmable Logic and Applications
CASES: International Conference on Compilers, Architecture, and Synthesis for Embedded Systems


Rank 3:


ICA3PP: Algs and Archs for Parall Proc
EuroMICRO: New Frontiers of Information Technology
ACS: Australian Supercomputing Conf
ISC: Information Security Conference


Unranked:

Advanced Research in VLSI
International Symposium on System Synthesis
International Symposium on Computer Design
International Symposium on Circuits and Systems
Asia Pacific Design Automation Conference
International Symposium on Physical Design
International Conference on VLSI Design
CANPC: Communication, Architecture, and Applications for Network-Based Parallel Computing
CHARME: Conference on Correct Hardware Design and Verification Methods
CHES: Cryptographic Hardware and Embedded Systems
NDSS: Network and Distributed System Security Symposium
NOSA: Nordic Symposium on Software Architecture
ACAC: Australasian Computer Architecture Conference
CSCC: WSES/IEEE world multiconference on Circuits, Systems, Communications & Computers
ICN: IEEE International Conference on Networking

AREA: Applications and Media


Rank 1:


I3DG: ACM-SIGRAPH Interactive 3D Graphics
SIGGRAPH: ACM SIGGRAPH Conference
ACM-MM: ACM Multimedia Conference
DCC: Data Compression Conf
SIGMETRICS: ACM Conf on Meas. & Modelling of Comp Sys
SIGIR: ACM SIGIR Conf on Information Retrieval
PECCS: IFIP Intl Conf on Perf Eval of Comp & Comm Sys
WWW: World-Wide Web Conference


Rank 2:


IEEE Visualization
EUROGRAPH: European Graphics Conference
CGI: Computer Graphics International
CANIM: Computer Animation
PG: Pacific Graphics
ICME: Intl Conf on MMedia & Expo
NOSSDAV: Network and OS Support for Digital A/V
PADS: ACM/IEEE/SCS Workshop on Parallel & Dist Simulation
WSC: Winter Simulation Conference
ASS: IEEE Annual Simulation Symposium
MASCOTS: Symp Model Analysis & Sim of Comp & Telecom Sys
PT: Perf Tools - Intl Conf on Model Tech & Tools for CPE
NetStore: Network Storage Symposium
MMCN: ACM/SPIE Multimedia Computing and Networking
JCDL: Joint Conference on Digital Libraries


Rank 3:


ACM-HPC: ACM Hypertext Conf
MMM: Multimedia Modelling
DSS: Distributed Simulation Symposium
SCSC: Summer Computer Simulation Conference
WCSS: World Congress on Systems Simulation
ESS: European Simulation Symposium
ESM: European Simulation Multiconference
HPCN: High-Performance Computing and Networking
Geometry Modeling and Processing
WISE
DS-RT: Distributed Simulation and Real-time Applications
IEEE Intl Wshop on Dist Int Simul and Real-Time Applications
ECIR: European Colloquium on Information Retrieval
Ed-Media
IMSA: Intl Conf on Internet and MMedia Sys

Un-ranked:


DVAT: IS&T/SPIE Conf on Dig Video Compression Alg & Tech
MME: IEEE Intl Conf. on Multimedia in Education
ICMSO: Intl Conf on Modelling, Simulation and Optimisation
ICMS: IASTED Intl Conf on Modelling and Simulation
COTIM: Conference on Telecommunications and Information Markets
DOA: International Symposium on Distributed Objects and Applications
ECMAST: European Conference on Multimedia Applications, Services and Techniques
GIS: Workshop on Advances in Geographic Information Systems
IDA: Intelligent Data Analysis
IDMS: Interactive Distributed Multimedia Systems and Telecommunication Services
IUI: Intelligent User Interfaces
MIS: Workshop on Multimedia Information Systems
WECWIS: Workshop on Advanced Issues of E-Commerce and Web/based Information Systems
WIDM: Web Information and Data Management
WOWMOM: Workshop on Wireless Mobile Multimedia
WSCG: International Conference in Central Europe on Computer Graphics and Visualization
LDTA: Workshop on Language Descriptions, Tools and Applications
IPDPSWPIM: International Workshop on Parallel and Distributed Computing Issues in Wireless Networks and Mobile Computing
IWST: International Workshop on Scheduling and Telecommunications
APDCM: Workshop on Advances in Parallel and Distributed Computational Models
CIMA: International ICSC Congress on Computational Intelligence: Methods and Applications
FLA: Fuzzy Logic and Applications Meeting
ICACSD: International Conference on Application of Concurrency to System Design
ICATPN: International conference on application and theory of Petri nets
AICCSA: ACS International Conference on Computer Systems and Applications
CAGD: International Symposium of Computer Aided Geometric Design
Spanish Symposium on Pattern Recognition and Image Analysis
International Workshop on Cluster Infrastructure for Web Server and E-Commerce Applications
WSES ISA: Information Science And Applications Conference
CHT: International Symposium on Advances in Computational Heat Transfer
IMACS: International Conference on Applications of Computer Algebra
VIPromCom: International Symposium on Video Processing and Multimedia Communications
PDMPR: International Workshop on Parallel and Distributed Multimedia Processing & Retrieval
International Symposium On Computational And Applied Pdes
PDCAT: International Conference on Parallel and Distributed Computing, Applications, and Techniques
Biennial Computational Techniques and Applications Conference
Symposium on Advanced Computing in Financial Markets
WCCE: World Conference on Computers in Education
ITCOM: SPIE's International Symposium on The Convergence of Information Technologies and Communications
Conference on Commercial Applications for High-Performance Computing
MSA: Metacomputing Systems and Applications Workshop
WPMC : International Symposium on Wireless Personal Multimedia Communications
WSC: Online World Conference on Soft Computing in Industrial Applications
HERCMA: Hellenic European Research on Computer Mathematics and its Applications
PARA: Workshop on Applied Parallel Computing
International Computer Science Conference: Active Media Technology
IW-MMDBMS - Int. Workshop on Multi-Media Data Base Management Systems

AREA: System Technology


Rank 1:


SIGCOMM: ACM Conf on Comm Architectures, Protocols & Apps
INFOCOM: Annual Joint Conf IEEE Comp & Comm Soc
SPAA: Symp on Parallel Algms and Architecture
PODC: ACM Symp on Principles of Distributed Computing
PPoPP: Principles and Practice of Parallel Programming
RTSS: Real Time Systems Symp
SOSP: ACM SIGOPS Symp on OS Principles
OSDI: Usenix Symp on OS Design and Implementation
CCS: ACM Conf on Comp and Communications Security
IEEE Symposium on Security and Privacy
MOBICOM: ACM Intl Conf on Mobile Computing and Networking
USENIX Conf on Internet Tech and Sys
ICNP: Intl Conf on Network Protocols
PACT: Intl Conf on Parallel Arch and Compil Tech
RTAS: IEEE Real-Time and Embedded Technology and Applications Symposium
ICDCS: IEEE Intl Conf on Distributed Comp Systems

Rank 2:


CC: Compiler Construction
IPDPS: Intl Parallel and Dist Processing Symp
IC3N: Intl Conf on Comp Comm and Networks
ICPP: Intl Conf on Parallel Processing
SRDS: Symp on Reliable Distributed Systems
MPPOI: Massively Par Proc Using Opt Interconns
ASAP: Intl Conf on Apps for Specific Array Processors
Euro-Par: European Conf. on Parallel Computing
Fast Software Encryption
Usenix Security Symposium
European Symposium on Research in Computer Security
WCW: Web Caching Workshop
LCN: IEEE Annual Conference on Local Computer Networks
IPCCC: IEEE Intl Phoenix Conf on Comp & Communications
CCC: Cluster Computing Conference
ICC: Intl Conf on Comm
WCNC: IEEE Wireless Communications and Networking Conference
CSFW: IEEE Computer Security Foundations Workshop


Rank 3:


MPCS: Intl. Conf. on Massively Parallel Computing Systems
GLOBECOM: Global Comm
ICCC: Intl Conf on Comp Communication
NOMS: IEEE Network Operations and Management Symp
CONPAR: Intl Conf on Vector and Parallel Processing
VAPP: Vector and Parallel Processing
ICPADS: Intl Conf. on Parallel and Distributed Systems
Public Key Cryptosystems
Annual Workshop on Selected Areas in Cryptography
Australasia Conference on Information Security and Privacy
Int. Conf on Inform and Comm. Security
Financial Cryptography
Workshop on Information Hiding
Smart Card Research and Advanced Application Conference
ICON: Intl Conf on Networks
NCC: Nat Conf Comm
IN: IEEE Intell Network Workshop
Softcomm: Conf on Software in Tcomms and Comp Networks
INET: Internet Society Conf
Workshop on Security and Privacy in E-commerce


Un-ranked:


PARCO: Parallel Computing
SE: Intl Conf on Systems Engineering (**)
PDSECA: workshop on Parallel and Distributed Scientific and Engineering Computing with Applications
CACS: Computer Audit, Control and Security Conference
SREIS: Symposium on Requirements Engineering for Information Security
SAFECOMP: International Conference on Computer Safety, Reliability and Security
IREJVM: Workshop on Intermediate Representation Engineering for the Java Virtual Machine
EC: ACM Conference on Electronic Commerce
EWSPT: European Workshop on Software Process Technology
HotOS: Workshop on Hot Topics in Operating Systems
HPTS: High Performance Transaction Systems
Hybrid Systems
ICEIS: International Conference on Enterprise Information Systems
IOPADS: I/O in Parallel and Distributed Systems
IRREGULAR: Workshop on Parallel Algorithms for Irregularly Structured Problems
KiVS: Kommunikation in Verteilten Systemen
LCR: Languages, Compilers, and Run-Time Systems for Scalable Computers
MCS: Multiple Classifier Systems
MSS: Symposium on Mass Storage Systems
NGITS: Next Generation Information Technologies and Systems
OOIS: Object Oriented Information Systems
SCM: System Configuration Management
Security Protocols Workshop
SIGOPS European Workshop
SPDP: Symposium on Parallel and Distributed Processing
TreDS: Trends in Distributed Systems
USENIX Technical Conference
VISUAL: Visual Information and Information Systems
FoDS: Foundations of Distributed Systems: Design and Verification of Protocols conference
RV: Post-CAV Workshop on Runtime Verification
ICAIS: International ICSC-NAISO Congress on Autonomous Intelligent Systems
ITiCSE: Conference on Integrating Technology into Computer Science Education
CSCS: CyberSystems and Computer Science Conference
AUIC: Australasian User Interface Conference
ITI: Meeting of Researchers in Computer Science, Information Systems Research & Statistics
European Conference on Parallel Processing
RODLICS: Wses International Conference on Robotics, Distance Learning & Intelligent Communication Systems
International Conference On Multimedia, Internet & Video Technologies
PaCT: Parallel Computing Technologies workshop
PPAM: International Conference on Parallel Processing and Applied Mathematics
International Conference On Information Networks, Systems And Technologies
AmiRE: Conference on Autonomous Minirobots for Research and Edutainment
DSN: The International Conference on Dependable Systems and Networks
IHW: Information Hiding Workshop
GTVMT: International Workshop on Graph Transformation and Visual Modeling Techniques

AREA: Programming Languages and Software Engineering


Rank 1:


POPL: ACM-SIGACT Symp on Principles of Prog Langs
PLDI: ACM-SIGPLAN Symp on Prog Lang Design & Impl
OOPSLA: OO Prog Systems, Langs and Applications
ICFP: Intl Conf on Functional Programming
JICSLP/ICLP/ILPS: (Joint) Intl Conf/Symp on Logic Prog
ICSE: Intl Conf on Software Engineering
FSE: ACM Conf on the Foundations of Software Engineering (inc: ESEC-FSE)
FM/FME: Formal Methods, World Congress/Europe
CAV: Computer Aided Verification

Rank 2:


CP: Intl Conf on Principles & Practice of Constraint Prog
TACAS: Tools and Algos for the Const and An of Systems
ESOP: European Conf on Programming
ICCL: IEEE Intl Conf on Computer Languages
PEPM: Symp on Partial Evaluation and Prog Manipulation
SAS: Static Analysis Symposium
RTA: Rewriting Techniques and Applications
IWSSD: Intl Workshop on S/W Spec & Design
CAiSE: Intl Conf on Advanced Info System Engineering
SSR: ACM SIGSOFT Working Conf on Software Reusability
SEKE: Intl Conf on S/E and Knowledge Engineering
ICSR: IEEE Intl Conf on Software Reuse
ASE: Automated Software Engineering Conference
PADL: Practical Aspects of Declarative Languages
ISRE: Requirements Engineering
ICECCS: IEEE Intl Conf on Eng. of Complex Computer Systems
IEEE Intl Conf on Formal Engineering Methods
Intl Conf on Integrated Formal Methods
FOSSACS: Foundations of Software Science and Comp Struct
APLAS: Asian Symposium on Programming Languages and Systems
MPC: Mathematics of Program Construction
ECOOP: European Conference on Object-Oriented Programming
ICSM: Intl. Conf on Software Maintenance
HASKELL - Haskell Workshop


Rank 3:


FASE: Fund Appr to Soft Eng
APSEC: Asia-Pacific S/E Conf
PAP/PACT: Practical Aspects of PROLOG/Constraint Tech
ALP: Intl Conf on Algebraic and Logic Programming
PLILP: Prog Lang Implementation & Logic Programming
LOPSTR: Intl Workshop on Logic Prog Synthesis & Transf
ICCC: Intl Conf on Compiler Construction
COMPSAC: Intl. Computer S/W and Applications Conf
TAPSOFT: Intl Joint Conf on Theory & Pract of S/W Dev
WCRE: SIGSOFT Working Conf on Reverse Engineering
AQSDT: Symp on Assessment of Quality S/W Dev Tools
IFIP Intl Conf on Open Distributed Processing
Intl Conf of Z Users
IFIP Joint Int'l Conference on Formal Description Techniques and Protocol Specification, Testing, And Verification
PSI (Ershov conference)
UML: International Conference on the Unified Modeling Language

Un-ranked:


Australian Software Engineering Conference
IEEE Int. W'shop on Object-oriented Real-time Dependable Sys. (WORDS)
IEEE International Symposium on High Assurance Systems Engineering
The Northern Formal Methods Workshops
Formal Methods Pacific
Int. Workshop on Formal Methods for Industrial Critical Systems
JFPLC - International French Speaking Conference on Logic and Constraint Programming
L&L - Workshop on Logic and Learning
SFP - Scottish Functional Programming Workshop
LCCS - International Workshop on Logic and Complexity in Computer Science
VLFM - Visual Languages and Formal Methods
NASA LaRC Formal Methods Workshop
PASTE: Workshop on Program Analysis For Software Tools and Engineering
TLCA: Typed Lambda Calculus and Applications
FATES - A Satellite workshop on Formal Approaches to Testing of Software
Workshop On Java For High-Performance Computing
DSLSE - Domain-Specific Languages for Software Engineering
FTJP - Workshop on Formal Techniques for Java Programs
WFLP - International Workshop on Functional and (Constraint) Logic Programming
FOOL - International Workshop on Foundations of Object-Oriented Languages
SREIS - Symposium on Requirements Engineering for Information Security
HLPP - International workshop on High-level parallel programming and applications
INAP - International Conference on Applications of Prolog
MPOOL - Workshop on Multiparadigm Programming with OO Languages
PADO - Symposium on Programs as Data Objects
TOOLS: Int'l Conf Technology of Object-Oriented Languages and Systems
Australasian Conference on Parallel And Real-Time Systems
AvoCS: Workshop on Automated Verification of Critical Systems
SPIN: Workshop on Model Checking of Software
FemSys: Workshop on Formal Design of Safety Critical Embedded Systems
Ada-Europe
PPDP: Principles and Practice of Declarative Programming
APL Conference
ASM: Workshops on Abstract State Machines
COORDINATION: Coordination Models and Languages
DocEng: ACM Symposium on Document Engineering
DSV-IS: Design, Specification, and Verification of Interactive Systems
FMCAD: Formal Methods in Computer-Aided Design
FMLDO: Workshop on Foundations of Models and Languages for Data and Objects
IFL: Implementation of Functional Languages
ILP: International Workshop on Inductive Logic Programming
ISSTA: International Symposium on Software Testing and Analysis
ITC: International Test Conference
IWFM: Irish Workshop in Formal Methods
Java Grande
LP: Logic Programming: Japanese Conference
LPAR: Logic Programming and Automated Reasoning
LPE: Workshop on Logic Programming Environments
LPNMR: Logic Programming and Non-monotonic Reasoning
PJW: Workshop on Persistence and Java
RCLP: Russian Conference on Logic Programming
STEP: Software Technology and Engineering Practice
TestCom: IFIP International Conference on Testing of Communicating Systems
VL: Visual Languages
FMPPTA: Workshop on Formal Methods for Parallel Programming Theory and Applications
WRS: International Workshop on Reduction Strategies in Rewriting and Programming
FORMALWARE: Meeting on Formalware Engineering: Formal Methods for Engineering Software
DRE: conference Data Reverse Engineering
STAREAST: Software Testing Analysis & Review Conference
Conference on Applied Mathematics and Scientific Computing
International Testing Computer Software Conference
Linux Showcase & Conference
FLOPS: International Symposium on Functional and Logic Programming
GCSE: International Conference on Generative and Component-Based Software Engineering
JOSES: Java Optimization Strategies for Embedded Systems
AADEBUG: Automated and Algorithmic Debugging
AMAST: Algebraic Methodology and Software Technology

AREA: Algorithms and Theory



Rank 1:


STOC: ACM Symp on Theory of Computing
FOCS: IEEE Symp on Foundations of Computer Science
COLT: Computational Learning Theory
LICS: IEEE Symp on Logic in Computer Science
SCG: ACM Symp on Computational Geometry
SODA: ACM/SIAM Symp on Discrete Algorithms
SPAA: ACM Symp on Parallel Algorithms and Architectures
ISSAC: Intl. Symp on Symbolic and Algebraic Computation
CRYPTO: Advances in Cryptology


Rank 2:


EUROCRYPT: European Conf on Cryptography
CONCUR: International Conference on Concurrency Theory
ICALP: Intl Colloquium on Automata, Languages and Prog
STACS: Symp on Theoretical Aspects of Computer Science
CC: IEEE Symp on Computational Complexity
WADS: Workshop on Algorithms and Data Structures
MFCS: Mathematical Foundations of Computer Science
SWAT: Scandinavian Workshop on Algorithm Theory
ESA: European Symp on Algorithms
IPCO: MPS Conf on integer programming & comb optimization
LFCS: Logical Foundations of Computer Science
ALT: Algorithmic Learning Theory
EUROCOLT: European Conf on Learning Theory
DISC: Int'l Symp on Distributed Computing (formerly WDAG: Workshop on Distributed Algorithms)
ISTCS: Israel Symp on Theory of Computing and Systems
ISAAC: Intl Symp on Algorithms and Computation
FST&TCS: Foundations of S/W Tech & Theoretical CS
LATIN: Intl Symp on Latin American Theoretical Informatics
CADE: Conf on Automated Deduction
IEEEIT: IEEE Symposium on Information Theory
Asiacrypt


Rank 3:


MEGA: Methodes Effectives en Geometrie Algebrique (Effective Methods in Algebraic Geometry)
ASIAN: Asian Computing Science Conf
CCCG: Canadian Conf on Computational Geometry
FCT: Fundamentals of Computation Theory
WG: Workshop on Graph Theory
CIAC: Italian Conf on Algorithms and Complexity
ICCI: Advances in Computing and Information
AWTI: Argentine Workshop on Theoretical Informatics
CATS: The Australian Theory Symp
COCOON: Annual Intl Computing and Combinatorics Conf
UMC: Unconventional Models of Computation
MCU: Universal Machines and Computations
GD: Graph Drawing
SIROCCO: Structural Info & Communication Complexity
ALEX: Algorithms and Experiments
ALG: ENGG Workshop on Algorithm Engineering
LPMA: Intl Workshop on Logic Programming and Multi-Agents
EWLR: European Workshop on Learning Robots
CITB: Complexity & info-theoretic approaches to biology
FTP: Intl Workshop on First-Order Theorem Proving (FTP)
CSL: Annual Conf on Computer Science Logic (CSL)
AAAAECC: Conf On Applied Algebra, Algebraic Algms & ECC
DMTCS: Intl Conf on Disc Math and TCS
JCDCG: Japan Conference on Discrete and Computational Geometry

Un-ranked:

Information Theory Workshop
CL: International Conference on Computational Logic
COSIT: Spatial Information Theory
ETAPS: European joint conference on Theory And Practice of Software
ICCS: International Conference on Conceptual Structures
ICISC: Information Security and Cryptology
PPSN: Parallel Problem Solving from Nature
SOFSEM: Conference on Current Trends in Theory and Practice of Informatics
TPHOLs: Theorem Proving in Higher Order Logics
WADT: Workshop on Algebraic Development Techniques
TERM: THEMATIC TERM: Semigroups, Algorithms, Automata and Languages
IMGTA: Italian Meeting on Game Theory and Applications
DLT: Developments in Language Theory
International Workshop on Discrete Algorithms and Methods for Mobile Computing and Communications
APPROX: International Workshop on Approximation Algorithms for Combinatorial Optimization Problems
WAE: Workshop on Algorithm Engineering
CMFT: Computational Methods and Function Theory
AWOCA: Australasian Workshop on Combinatorial Algorithms
FUN: Fun with Algorithms Meeting
ICTCS: Italian Conference on Theoretical Computer Science
ComMaC: International Conference on Computational Mathematics
TLCA: Typed Lambda Calculus and Applications
DCAGRS: Workshop on Descriptional Complexity of Automata, Grammars and Related Structures

AREA: Biomedical


Rank 1:

RECOMB: Annual Intl Conf on Comp Molecular Biology

ISMB: International Conference on Intelligent Systems for Molecular Biology

Rank 2:

AMIA: American Medical Informatics Annual Fall Symposium
DNA: Meeting on DNA Based Computers
WABI: Workshop on Algorithms in Bioinformatics

Rank 3:

MEDINFO: World Congress on Medical Informatics
SETA: International Conference on Sequences and their Applications
ECAIM: European Conf on AI in Medicine
APAMI: Asia Pacific Assoc for Medical Informatics Conf
INBS: IEEE Intl Symp on Intell. in Neural & Bio Systems

Un-ranked:

MCBC: WSES Conf on Mathematics and Computers in Biology and Chemistry
KDDMBD: Knowledge Discovery and Data Mining in Biological Databases Meeting

AREA: Miscellaneous


Rank 1:

Rank 2:

CSCW: Conference on Computer Supported Cooperative Work (*)

Rank 3:


SAC: ACM/SIGAPP Symposium on Applied Computing
ICSC: International Computer Science Conference
ISCIS: Intl Symp on Computer and Information Sciences
ICSC2: International Computer Symposium Conference
ICCE: Intl Conf on Computers in Education
WCC: World Computing Congress
PATAT: Practice and Theory of Automated Timetabling

Un-ranked:


ICCI: International Conference on Cognitive Informatics
APISIT: Asia Pacific International Symposium on Information Technology
CW: The International Conference on Cyberworlds
Workshop on Open Hypermedia Systems
Workshop on Middleware for Mobile Computing
International Working Conference on Distributed Applications and Interoperable Systems
ADL: Advances in Digital Libraries
ADT: Specification of Abstract Data Type Workshops
AVI: Working Conference on Advanced Visual Interfaces
DL: Digital Libraries
DLog: Description Logics
ECDL: European Conference on Digital Libraries
EDCC: European Dependable Computing Conference
FroCos: Frontiers of Combining Systems
FTCS: Symposium on Fault-Tolerant Computing
IFIP World Computer Congress
INTEROP: Interoperating Geographic Information Systems
IO: Information Outlook
IQ: MIT Conference on Information Quality
IUC: International Unicode Conference
IWMM: International Workshop on Memory Management
MD: IEEE Meta-Data Conference
Middleware
MLDM: Machine Learning and Data Mining in Pattern Recognition
POS: Workshop on Persistent Object Systems
SCCC: International Conference of the Chilean Computer Science Society
SPIRE: String Processing and Information Retrieval
TABLEAUX: Analytic Tableaux and Related Methods
TIME Workshops
TREC: Text REtrieval Conference
UIDIS: User Interfaces to Data Intensive Systems
VRML Conference
AFIPS: American Federation of Information Processing Societies
ACSC: Australasian Computer Science Conference
CMCS: Coalgebraic Methods in Computer Science
BCTCS: British Colloquium for Theoretical Computer Science
IJCAR: The International Joint Conference on Automated Reasoning
STRATEGIES: International Workshop on Strategies in Automated Deduction
UNIF: International Workshop on Unification
SOCO: Meeting on Soft Computing
ConCoord: International Workshop on Concurrency and Coordination
CIAA: International Conference on Implementation and Application of Automata
Workshop on Information Structure, Discourse Structure and Discourse Semantics
RANDOM: International Workshop on Randomization and Approximation Techniques in Computer Science
WMC: Workshop on Membrane Computing
FICS: Fixed Points in Computer Science
DC Computer Science Conference
Workshop on Novel Approaches to Hard Discrete Optimization
NALAC: Numerical Analysis, Linear Algebra And Computations Conference
ICLSSC: International Conference on Large-Scale Scientific Computations
ISACA: Information Systems Audit and Control Association International Conference
ICOSAHOM: International Conference On Spectral And High Order Methods
AIP: International Conference on Applied Inverse Problems: Theoretical and Computational Aspects
ECCM: European Conference On Computational Mechanics
SciCADE: Scientific Computing and Differential Equations
BMVC: British Machine Vision Conference
COMEP: Euroconference on Computational Mechanics and Engineering Practice
JCIS: Joint Conference on Information Sciences
CHP: Compilers for High Performance conference
SIAM Conference on Geometric Design and Computing