IT Certifications


Monday, January 31, 2011

Cloneable Interface::::Nucleus Software Java Interview Questions: Qu8

clone() and the Cloneable Interface in Java

------------------------------------------------------------------------------- 

The clone( ) method generates a duplicate copy of the object on which it is called. Only classes that implement the Cloneable interface can be cloned.

The Cloneable interface defines no members. It is used to indicate that a class allows a field-for-field copy of an object (that is, a clone) to be made.

If you try to call clone( ) on a class that does not implement Cloneable, a CloneNotSupportedException is thrown. When a clone is made, the constructor for the object being cloned is not called. A clone is simply an exact copy of the original.

Cloning is a potentially dangerous action, because it can cause unintended side effects. 

For example, if the object being cloned contains a reference variable called obRef, then when the clone is made, obRef in the clone will refer to the same object as does obRef in the original. If the clone makes a change to the contents of the object referred to by obRef, then it will be changed for the original object, too. 
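As a hedged illustration of that shared-reference problem, here is a minimal sketch (the classes and field names below are invented, not part of the original example):

// Hypothetical sketch: a shallow clone shares the object referred to by obRef.
class Data {
    int value = 1;
}

class Holder implements Cloneable {
    Data obRef = new Data();   // reference field: only the reference is copied

    public Object clone() {
        try {
            return super.clone();            // shallow copy made by Object.clone()
        } catch (CloneNotSupportedException e) {
            throw new AssertionError(e);
        }
    }
}

class SharedReferenceDemo {
    public static void main(String[] args) {
        Holder original = new Holder();
        Holder copy = (Holder) original.clone();
        copy.obRef.value = 99;                      // change made through the clone
        System.out.println(original.obRef.value);   // prints 99 -- the original sees it too
    }
}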

Here is another example. If an object opens an I/O stream and is then cloned, two objects will be capable of operating on the same stream. Further, if one of these objects closes the stream, the other object might still attempt to write to it, causing an error.
Because cloning can cause problems, clone( ) is declared as protected inside Object. This means that it must either be called from within a method defined by the class that implements Cloneable, or it must be explicitly overridden by that class so that it is public. Let's look at an example of each approach.
The following program implements Cloneable and defines the method cloneTest( ), which calls clone( ) in Object:
// Demonstrate the clone() method.
class TestClone implements Cloneable {
    int a;
    double b;

    // This method calls Object's clone().
    TestClone cloneTest() {
        try {
            // call clone in Object.
            return (TestClone) super.clone();
        } catch (CloneNotSupportedException e) {
            System.out.println("Cloning not allowed.");
            return this;
        }
    }
}

class CloneDemo {
    public static void main(String args[]) {
        TestClone x1 = new TestClone();
        TestClone x2;
        x1.a = 10;
        x1.b = 20.98;
        x2 = x1.cloneTest(); // clone x1
        System.out.println("x1: " + x1.a + " " + x1.b);
        System.out.println("x2: " + x2.a + " " + x2.b);
    }
}
Here, the method cloneTest( ) calls clone( ) in Object and returns the result. Notice that the object returned by clone( ) must be cast into its appropriate type (TestClone). The following example overrides clone( ) so that it can be called from code outside of its class. To do this, its access specifier must be public, as shown here:
// Override the clone() method.
class TestClone implements Cloneable {
    int a;
    double b;

    // clone() is now overridden and is public.
    public Object clone() {
        try {
            // call clone in Object.
            return super.clone();
        } catch (CloneNotSupportedException e) {
            System.out.println("Cloning not allowed.");
            return this;
        }
    }
}

class CloneDemo2 {
    public static void main(String args[]) {
        TestClone x1 = new TestClone();
        TestClone x2;
        x1.a = 10;
        x1.b = 20.98;
        // here, clone() is called directly.
        x2 = (TestClone) x1.clone();
        System.out.println("x1: " + x1.a + " " + x1.b);
        System.out.println("x2: " + x2.a + " " + x2.b);
    }
}

The side effects caused by cloning are sometimes difficult to see at first. It is easy to think that a class is safe for cloning when it actually is not. In general, you should not implement Cloneable for any class without good reason.

 

wait | notify | notifyAll | Nucleus Software Java Interview Questions: Qu6

wait() | notify() | notifyAll()

Multithreading replaces event loop programming by dividing your tasks into discrete and logical units. Threads also provide a secondary benefit: they do away with polling. Polling is usually implemented by a loop that is used to check some condition repeatedly. Once the condition is true, appropriate action is taken. This wastes CPU time. For example, consider the classic queuing problem, where one thread is producing some data and another is consuming it. To make the problem more interesting, suppose that the producer has to wait until the consumer is finished before it generates more data. In a polling system, the consumer would waste many CPU cycles while it
waited for the producer to produce. Once the producer was finished, it would start polling, wasting more CPU cycles waiting for the consumer to finish, and so on. Clearly, this situation is undesirable.
To avoid polling, Java includes an elegant interprocess communication mechanism via the wait( ), notify( ), and notifyAll( ) methods. These methods are implemented as final methods in Object, so all classes have them. All three methods can be called only from within a synchronized context. Although conceptually advanced from a computer science perspective, the rules for using these methods are actually quite simple:
  • wait( ) tells the calling thread to give up the monitor and go to sleep until some other
    thread enters the same monitor and calls notify( ).
  • notify( ) wakes up a thread that called wait( ) on the same object.
  • notifyAll( ) wakes up all the threads that called wait( ) on the same object. The
    highest priority thread will typically run first.
These methods are declared within Object, as shown here:
final void wait( ) throws InterruptedException
final void notify( )
final void notifyAll( )
--------------------------------------------------------------------------------------
Here you can see that these methods are declared final, which means they cannot be overridden.
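As a hedged illustration of the rules above (the class and method names are invented for this sketch), here is a minimal single-slot producer/consumer queue built on wait( ) and notify( ):

// Minimal single-slot queue: put() waits until the slot is empty,
// get() waits until the slot is full.
class SingleSlotQueue {
    private int value;
    private boolean full = false;

    synchronized void put(int v) throws InterruptedException {
        while (full) {
            wait();              // give up the monitor until a consumer takes the value
        }
        value = v;
        full = true;
        notify();                // wake a waiting consumer
    }

    synchronized int get() throws InterruptedException {
        while (!full) {
            wait();              // give up the monitor until a producer stores a value
        }
        full = false;
        notify();                // wake a waiting producer
        return value;
    }
}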

IBM Interview Questions: Serialization Vs Externalizable

Serialization Vs Externalizable

One obvious difference is that Serializable is a marker interface and doesn't contain any methods, whereas the Externalizable interface contains two methods: writeExternal(ObjectOutput) and readExternal(ObjectInput). But the main difference between the two is that the Externalizable interface gives the implementing class complete control over the object serialization process, whereas the Serializable interface normally uses a default implementation to handle it.

While implementing Serializable, you are not forced to define any methods, as it's a marker interface. However, you can use the writeObject or readObject methods to handle the serialization of complex objects. But while implementing the Externalizable interface, you are bound to define the two methods writeExternal and readExternal, and the entire object serialization process is handled solely by these two methods.

In the case of a Serializable implementation, the state of superclasses is automatically taken care of by the default implementation, whereas in the case of Externalizable the implementing class needs to handle everything on its own, as there is no default implementation here.

Example Scenario: when to use what?

If everything is automatically taken care of by implementing the Serializable interface, why would anyone want to implement the Externalizable interface and bother to define the two methods? Simply to have complete control over the process. Okay... let's take a sample example to understand this. Suppose we have an object with hundreds of (non-transient) fields and we want only a few of them to be stored on persistent storage, not all. One solution would be to declare all the other fields (except those we want to serialize) as transient, and the default serialization process will automatically take care of that. But what if those few fields are not fixed at design time and are instead decided conditionally at runtime? In such a situation, implementing the Externalizable interface will probably be a better solution. Similarly, there may be scenarios where we simply don't want to maintain the state of the superclasses (which is automatically maintained by a Serializable implementation).
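As a hedged sketch of that scenario (the class and its fields are invented), an Externalizable implementation that decides at runtime which fields to write might look like this:

import java.io.Externalizable;
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;

public class Customer implements Externalizable {
    private String name;
    private String email;
    private String sessionToken;   // written only when persistSession is true (decided at runtime)
    private boolean persistSession;

    public Customer() { }          // Externalizable requires a public no-arg constructor

    public void writeExternal(ObjectOutput out) throws IOException {
        out.writeUTF(name);
        out.writeUTF(email);
        out.writeBoolean(persistSession);
        if (persistSession) {
            out.writeUTF(sessionToken);
        }
    }

    public void readExternal(ObjectInput in) throws IOException, ClassNotFoundException {
        name = in.readUTF();
        email = in.readUTF();
        persistSession = in.readBoolean();
        if (persistSession) {
            sessionToken = in.readUTF();
        }
    }
}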

Which has better performance - Externalizable or Serializable?

In most cases (or in all, if implemented correctly), Externalizable will be more efficient than Serializable, for the simple reason that with Externalizable the entire process of marshalling, un-marshalling, writing to the stream, and reading back from the stream is under your control: you write the code, and you can choose the best approach for the situation you are in. With Serializable, all of this (or at least most of it) is done implicitly, and the internal implementation, being generic enough to support any possible case, cannot always be the most efficient. The other reason Serializable is less efficient is that several reflective calls are made internally to get the metadata of the class; no such calls are needed with Externalizable.

However, the efficiency comes at a price. You lose flexibility, because as soon as your class definition changes you will probably need to modify your Externalizable implementation as well. Additionally, since you have to write more code with Externalizable, you increase the chances of adding more bugs to your application.

Another disadvantage of Externalizable is that you must have the class available in order to interpret the stream, because the stream format is opaque binary data. Normal serialization adds field names and types (this is why the reflective calls are needed) into the stream, so it is possible to reconstruct the object even without the object's class being available. You would, however, need to write the object reconstruction code yourself, as Java Serialization doesn't provide any such API at the moment. The point is that with Serializable you can at least write such code, because the stream is enriched with field names and types, whereas with Externalizable the stream contains just the data, so you can't do this without the class definition. As you can see, Serializable not only makes many reflective calls but also puts the name/type information into the stream, and this of course takes some time, making the Serializable process slower than the corresponding Externalizable process, where only the data is written to the stream.

Serialization | Marker Interface | Nucleus Software Java Interview Questions: Qu6

What is serialization? How do we implement it? Also, what is the meaning of a marker interface?
------------------------------------------------------------------------------------------------
Answer 1: It's a mechanism in Java to ensure that the objects of classes implementing the java.io.Serializable interface can store their state on a persistent storage, from which it can be loaded back into memory with the same state whenever needed.

The Serializable interface is only a marker interface and has no methods or fields in it. It just serves the purpose of notifying the JVM that the class implementing it may need to save its state on a persistent medium and subsequently restore that same saved state into memory when needed. The serialization runtime handles this either by using a serialVersionUID field declared in the class or by computing a default one for it (the next post talks in detail about serialVersionUID), and this identifier is used to check that the saved state and the class are compatible when the instance is re-created.

The subclasses of a Serializable class are automatically Serializable, and if you want to serialize subclasses of non-serializable classes, then you need to ensure that the superclass has a no-argument constructor. The reason is that, on marking the subclass as Serializable, serialization tries to save and restore the state of the public, protected, and (if accessible) package-private fields of the superclass as well. The subclass can do this only if the superclass has a no-argument constructor; otherwise, you'll get a runtime exception.

During deserialization, too, the state of the public, protected, and (if accessible) package-private fields of the non-serializable superclasses is initialized using the no-argument constructor of the superclass, while the state of the fields of the serializable subclass is restored from the stream.
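A hedged sketch of that rule (the class names here are invented): the non-serializable superclass needs an accessible no-argument constructor, which is invoked during deserialization to initialize its part of the object:

import java.io.Serializable;

// Not serializable itself; its no-argument constructor is called
// when an instance of a serializable subclass is deserialized.
class Vehicle {
    protected String registration = "UNREGISTERED";

    public Vehicle() { }                 // required for the subclass to be deserializable

    public Vehicle(String registration) {
        this.registration = registration;
    }
}

// Serializable subclass: its own fields are restored from the stream,
// while the Vehicle part is re-initialized via the no-arg constructor.
class Car extends Vehicle implements Serializable {
    private int doors = 4;
}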

Custom handling of objects while Serialization/Deserialization

In addition to the default serialization or deserialization of objects, Java also supports special handling of the serialization (or deserialization) of objects. You just need to implement the following special methods in that case and handle the save/restore of the objects however you want. These special methods are:
  • private void writeObject(java.io.ObjectOutputStream out) throws IOException
  • private void readObject(java.io.ObjectInputStream in) throws IOException, ClassNotFoundException
  • private void readObjectNoData() throws ObjectStreamException

writeObject method
As the name suggests, this method is used for writing the state of the object to the output stream passed as its parameter. Usually the defaultWriteObject() method (or methods from the DataOutput interface for primitive types) of the ObjectOutputStream class is used to write the non-static and non-transient fields of the current class to the output stream. The defaultWriteObject() method can be called from within a writeObject method only; otherwise it throws NotActiveException. An I/O error while writing the data to the stream will cause an IOException to be thrown by this method.

readObject method
This method reads the data saved by the writeObject method and restores the state of the object. This method normally calls the defaultReadObject() method (or methods from the DataInput interface for primitive types) of the ObjectInputStream class to restore the non-static and non-transient fields of the object.

An interesting scenario: Suppose you have a class having 4 non-static and non-transient fields namely fld1, fld2, fld3, and fld4. Now you create an instance and save the state of the object on a persistent medium using writeObject method. Down the line the class evolves and you need to add two new non-static, non-transient fields namely fld5, fld6. What will happen, if you try to restore the previously saved state of the object with an object reference of the new version of the Class?

Well... nothing serious. Actually the defaultReadObject() method reads the data and the field names from the stream and assigns them to the correspondingly named fields of the current object. So, in our case, fld1, fld2, fld3, and fld4 will be restored from the stream and the other two fields, fld5 and fld6, will keep their default values.

readObjectNoData method
Suppose you have a class named Class1 and you have saved the state of an object of this class on a persistent medium. You send that saved state to some other application, which has a different version of the same class 'Class1'. Say the recipient application has a version where Class1 extends another class named 'Class2'. Now, if you try to restore the saved state shipped to the recipient application, the readObjectNoData method is used instead of the readObject method. The reason is that the recipient application will try to look for the state of the superclass of Class1, which is Class2, and obviously it won't find that in the saved state. This may also happen where the saved state of the object gets tampered with. The readObjectNoData method simply initializes the state of the superclass in either of the above two scenarios.
------------------------------------------------------------------------------------------------------------

Answer 2: By Sun Microsystems experts

We all know the Java platform allows us to create reusable objects in memory. However, all of those objects exist only as long as the Java virtual machine remains running. It would be nice if the objects we create could exist beyond the lifetime of the virtual machine, wouldn't it? Well, with object serialization, you can flatten your objects and reuse them in powerful ways.
Object serialization is the process of saving an object's state to a sequence of bytes, as well as the process of rebuilding those bytes into a live object at some future time. The Java Serialization API provides a standard mechanism for developers to handle object serialization. The API is small and easy to use, provided the classes and methods are understood.
Throughout this article, we'll examine how to persist your Java objects, starting with the basics and proceeding to the more advanced concepts. We'll learn three different ways to perform serialization -- using the default protocol, customizing the default protocol, and creating our own protocol -- and we'll investigate concerns that arise with any persistence scheme such as object caching, version control, and performance issues.
By the conclusion of this article, you should have a solid comprehension of that powerful yet sometimes poorly understood Java API.

First Things First: The Default Mechanism

Let's start with the basics. To persist an object in Java, we must have a persistent object. An object is marked serializable by implementing the java.io.Serializable interface, which signifies to the underlying API that the object can be flattened into bytes and subsequently inflated in the future.
Let's look at a persistent class we'll use to demonstrate the serialization mechanism:
10 import java.io.Serializable;
20 import java.util.Date;
30 import java.util.Calendar;
40 public class PersistentTime implements Serializable
50 {
60 private Date time;
70
80 public PersistentTime()
90 {
100      time = Calendar.getInstance().getTime();
110    }
120
130    public Date getTime()
140    {
150      return time;
160    }
170  }
As you can see, the only thing we had to do differently from creating a normal class is implement the java.io.Serializable interface on line 40. The completely empty Serializable is only a marker interface -- it simply allows the serialization mechanism to verify that the class is able to be persisted. Thus, we turn to the first rule of serialization:
Rule #1: The object to be persisted must implement the Serializable interface or inherit that implementation from its object hierarchy.
The next step is to actually persist the object. That is done with the java.io.ObjectOutputStream class. That class is a filter stream--it is wrapped around a lower-level byte stream (called a node stream) to handle the serialization protocol for us. Node streams can be used to write to file systems or even across sockets. That means we could easily transfer a flattened object across a network wire and have it be rebuilt on the other side!
Take a look at the code used to save the PersistentTime object:
10 import java.io.ObjectOutputStream;
20 import java.io.FileOutputStream;
30 import java.io.IOException;
40 public class FlattenTime
50 {
60 public static void main(String [] args)
70 {
80 String filename = "time.ser";
90 if(args.length > 0)
100     {
110       filename = args[0];
120     }
130     PersistentTime time = new PersistentTime();
140     FileOutputStream fos = null;
150     ObjectOutputStream out = null;
160     try
170     {
180       fos = new FileOutputStream(filename);
190       out = new ObjectOutputStream(fos);
200       out.writeObject(time);
210       out.close();
220     }
230     catch(IOException ex)
240     {
250       ex.printStackTrace();
260     }
270   }
280 }
The real work happens on line 200 when we call the ObjectOutputStream.writeObject() method, which kicks off the serialization mechanism and the object is flattened (in that case to a file).
To restore the file, we can employ the following code:
10 import java.io.ObjectInputStream;
20 import java.io.FileInputStream;
30 import java.io.IOException;
40 import java.util.Calendar;
50 public class InflateTime
60 {
70 public static void main(String [] args)
80 {
90 String filename = "time.ser";
100     if(args.length > 0)
110     {
120       filename = args[0];
130     }
140   PersistentTime time = null;
150   FileInputStream fis = null;
160   ObjectInputStream in = null;
170   try
180   {
190     fis = new FileInputStream(filename);
200     in = new ObjectInputStream(fis);
210     time = (PersistentTime)in.readObject();
220     in.close();
230   }
240   catch(IOException ex)
250   {
260     ex.printStackTrace();
270   }
280   catch(ClassNotFoundException ex)
290   {
300     ex.printStackTrace();
310   }
320   // print out restored time
330   System.out.println("Flattened time: " + time.getTime());
340   System.out.println();
350      // print out the current time
360   System.out.println("Current time: " + Calendar.getInstance().getTime());
370 }
380}
In the code above, the object's restoration occurs on line 210 with the ObjectInputStream.readObject() method call. The method call reads in the raw bytes that we previously persisted and creates a live object that is an exact replica of the original. Because readObject() can read any serializable object, a cast to the correct type is required. With that in mind, the class file must be accessible from the system in which the restoration occurs. In other words, the object's class file and methods are not saved; only the object's state is saved.
Later, on line 360, we simply call the getTime() method to retrieve the time at which the original object was flattened. The flattened time is compared to the current time to demonstrate that the mechanism indeed worked as expected.

Nonserializable Objects

The basic mechanism of Java serialization is simple to use, but there are some more things to know. As mentioned before, only objects marked Serializable can be persisted. The java.lang.Object class does not implement that interface. Therefore, not all the objects in Java can be persisted automatically. The good news is that most of them -- like AWT and Swing GUI components, strings, and arrays -- are serializable.
On the other hand, certain system-level classes such as Thread, OutputStream and its subclasses, and Socket are not serializable. Indeed, it would not make any sense if they were. For example, a thread running in my JVM would be using my system's memory. Persisting it and trying to run it in your JVM would make no sense at all. Another important point about java.lang.Object not implementing the Serializable interface is that any class you create that extends only Object (and no other serializable classes) is not serializable unless you implement the interface yourself (as done with the previous example).
That situation presents a problem: what if we have a class that contains an instance of Thread? In that case, can we ever persist objects of that type? The answer is yes, as long as we tell the serialization mechanism our intentions by marking our class's Thread object as transient.
Let's assume we want to create a class that performs an animation. I will not actually provide the animation code here, but here is the class we'll use:
10 import java.io.Serializable;
20 public class PersistentAnimation implements Serializable, Runnable
30 {
40 transient private Thread animator;
50 private int animationSpeed;
60 public PersistentAnimation(int animationSpeed)
70 {
80 this.animationSpeed = animationSpeed;
90 animator = new Thread(this);
100     animator.start();
110   }
120       public void run()
130   {
140     while(true)
150     {
160       // do animation here
170     }
180   }
190 }
When we create an instance of the PersistentAnimation class, the thread animator will be created and started as we expect. We've marked the thread on line 40 transient to tell the serialization mechanism that the field should not be saved along with the rest of that object's state (in this case, the field animationSpeed). The bottom line: you must mark transient any field that either cannot be serialized or that you do not want serialized. Serialization does not care about access modifiers such as private -- all nontransient fields are considered part of an object's persistent state and are eligible for persistence.
Therefore, we have another rule to add. Here are both rules concerning persistent objects:
  • Rule #1: The object to be persisted must implement the Serializable interface or inherit that implementation from its object hierarchy
  • Rule #2: The object to be persisted must mark all nonserializable fields transient

Customize the Default Protocol

Let's move on to the second way to perform serialization: customize the default protocol. Though the animation code above demonstrates how a thread could be included as part of an object while still making that object be serializable, there is a major problem with it if we recall how Java creates objects. To wit, when we create an object with the new keyword, the object's constructor is called only when a new instance of a class is created. Keeping that basic fact in mind, let's revisit our animation code. First, we instantiate an object of type PersistentAnimation, which begins the animation thread sequence. Next, we serialize the object with that code:
PersistentAnimation animation = new PersistentAnimation(10);
FileOutputStream fos = ...
ObjectOutputStream out = new ObjectOutputStream(fos);
out.writeObject(animation);
All seems fine until we read the object back in with a call to the readObject() method. Remember, a constructor is called only when a new instance is created. We are not creating a new instance here, we are restoring a persisted object. The end result is the animation object will work only once, when it is first instantiated. Kind of makes it useless to persist it, huh?
Well, there is good news. We can make our object work the way we want it to; we can make the animation restart upon restoration of the object. To accomplish that, we could, for example, create a startAnimation() helper method that does what the constructor currently does. We could then call that method from the constructor, after which we read the object back in. Not bad, but it introduces more complexity. Now, anyone who wants to use that animation object will have to know that method has to be called following the normal deserialization process. That does not make for a seamless mechanism, something the Java Serialization API promises developers.
There is, however, a strange yet crafty solution. By using a built-in feature of the serialization mechanism, developers can enhance the normal process by providing two methods inside their class files. Those methods are:
  • private void writeObject(ObjectOutputStream out) throws IOException;
  • private void readObject(ObjectInputStream in) throws IOException, ClassNotFoundException;
Notice that both methods are (and must be) declared private, which means that neither method is inherited, overridden, or overloaded. The trick here is that the virtual machine will automatically check to see if either method is declared during the corresponding method call. The virtual machine can call private methods of your class whenever it wants but no other objects can. Thus, the integrity of the class is maintained and the serialization protocol can continue to work as normal. The serialization protocol is always used the same way, by calling either ObjectOutputStream.writeObject() or ObjectInputStream.readObject(). So, even though those specialized private methods are provided, the object serialization works the same way as far as any calling object is concerned.
Considering all that, let's look at a revised version of PersistentAnimation that includes those private methods to allow us to have control over the deserialization process, giving us a pseudo-constructor:
10 import java.io.Serializable;
20 public class PersistentAnimation implements Serializable, Runnable
30 {
40 transient private Thread animator;
50 private int animationSpeed;
60 public PersistentAnimation(int animationSpeed)
70 {
80 this.animationSpeed = animationSpeed;
90 startAnimation();
100   }
110       public void run()
120   {
130     while(true)
140     {
150       // do animation here
160     }
170   }
180   private void writeObject(ObjectOutputStream out) throws IOException
190   {
200     out.defaultWriteObject();
220   }
230   private void readObject(ObjectInputStream in) throws IOException, ClassNotFoundException
240   {
250     // our "pseudo-constructor"
260     in.defaultReadObject();
270     // now we are a "live" object again, so let's run rebuild and start
280     startAnimation();
290
300   }
310   private void startAnimation()
320   {
330     animator = new Thread(this);
340     animator.start();
350   }
360 }
Notice the first line of each of the new private methods. Those calls do what they sound like -- they perform the default writing and reading of the flattened object, which is important because we are not replacing the normal process, we are only adding to it. Those methods work because the call to ObjectOutputStream.writeObject() kicks off the serialization protocol. First, the object is checked to ensure it implements Serializable and then it is checked to see whether either of those private methods are provided. If they are provided, the stream class is passed as the parameter, giving the code control over its usage.
Those private methods can be used for any customization you need to make to the serialization process. Encryption could be added to the output and decryption to the input (note that the bytes are written and read in cleartext with no obfuscation at all). They could be used to add extra data to the stream, perhaps a company versioning code. The possibilities are truly limitless.

Stop That Serialization!

OK, we have seen quite a bit about the serialization process, now let's see some more. What if you create a class whose superclass is serializable but you do not want that new class to be serializable? You cannot unimplement an interface, so if your superclass does implement Serializable, your new class implements it, too (assuming both rules listed above are met). To stop the automatic serialization, you can once again use the private methods to just throw the NotSerializableException. Here is how that would be done:
10 private void writeObject(ObjectOutputStream out) throws IOException
20 {
30 throw new NotSerializableException("Not today!");
40 }
50 private void readObject(ObjectInputStream in) throws IOException
60 {
70 throw new NotSerializableException("Not today!");
80 }
Any attempt to write or read that object will now always result in the exception being thrown. Remember, since those methods are declared private, nobody could modify your code without the source code available to them -- no overriding of those methods would be allowed by Java.

Create Your Own Protocol: the Externalizable Interface

Our discussion would be incomplete without mentioning the third option for serialization: create your own protocol with the Externalizable interface. Instead of implementing the Serializable interface, you can implement Externalizable, which contains two methods:
  • public void writeExternal(ObjectOutput out) throws IOException;
  • public void readExternal(ObjectInput in) throws IOException, ClassNotFoundException;
Just override those methods to provide your own protocol. Unlike the previous two serialization variations, nothing is provided for free here, though. That is, the protocol is entirely in your hands. Although it's the more difficult scenario, it's also the most controllable. An example situation for that alternate type of serialization: read and write PDF files with a Java application. If you know how to write and read PDF (the sequence of bytes required), you could provide the PDF-specific protocol in the writeExternal and readExternal methods.
Just as before, though, there is no difference in how a class that implements Externalizable is used. Just call writeObject() or readObject() and, voila, those externalizable methods will be called automatically.

Gotchas

There are a few things about the serialization protocol that can seem very strange to developers who are not aware. Of course, that is the purpose of the article -- to get you aware! So let's discuss a few of those gotchas and see if we can understand why they exist and how to handle them.

Caching Objects in the Stream

First, consider the situation in which an object is written to a stream and then written again later. By default, an ObjectOutputStream will maintain a reference to an object written to it. That means that if the state of the object changes and the object is written again, the new state will not be saved! Here is a code snippet that shows that problem in action:
10 ObjectOutputStream out = new ObjectOutputStream(...);
20 MyObject obj = new MyObject(); // must be Serializable
30 obj.setState(100);
40 out.writeObject(obj); // saves object with state = 100
50 obj.setState(200);
60 out.writeObject(obj); // does not save new object state
There are two ways to control that situation. First, you could make sure to always close the stream after a write call, ensuring the new object is written out each time. Second, you could call the ObjectOutputStream.reset() method, which would tell the stream to release the cache of references it is holding so all new write calls will actually be written. Be careful, though -- the reset flushes the entire object cache, so all objects that have been written could be rewritten.
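As a hedged continuation of the hypothetical snippet above, calling reset() between the two writes makes the second state visible:

obj.setState(200);
out.reset();           // clear the stream's reference cache
out.writeObject(obj);  // now the new state (200) is actually written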

Version Control

With our second gotcha, imagine you create a class, instantiate it, and write it out to an object stream. That flattened object sits in the file system for some time. Meanwhile, you update the class file, perhaps adding a new field. What happens when you try to read in the flattened object?
Well, the bad news is that an exception will be thrown -- specifically, the java.io.InvalidClassException -- because all persistent-capable classes are automatically given a unique identifier. If the identifier of the class does not equal the identifier of the flattened object, the exception will be thrown. However, if you really think about it, why should it be thrown just because I added a field? Couldn't the field just be set to its default value and then written out next time?
Yes, but it takes a little code manipulation. The identifier that is part of all classes is maintained in a field called serialVersionUID. If you wish to control versioning, you simply have to provide the serialVersionUID field manually and ensure it is always the same, no matter what changes you make to the classfile. You can use a utility that comes with the JDK distribution called serialver to see what that code would be by default (by default it is a hash computed from the class's structure).
Here is an example of using serialver with a class called Baz:
> serialver Baz
> Baz: static final long serialVersionUID = 10275539472837495L;
Simply copy the returned line with the version ID and paste it into your code. (On a Windows box, you can run that utility with the -show option to simplify the copy and paste procedure.) Now, if you make any changes to the Baz class file, just ensure that same version ID is specified and all will be well.
The version control works great as long as the changes are compatible. Compatible changes include adding or removing a method or a field. Incompatible changes include changing an object's hierarchy or removing the implementation of the Serializable interface. A complete list of compatible and incompatible changes is given in the Java Serialization Specification.

Performance Considerations

Our third gotcha: the default mechanism, although simple to use, is not the best performer. I wrote out a Date object to a file 1,000 times, repeating that procedure 100 times. The average time to write out the Date object was 115 milliseconds. I then manually wrote out the Date object, using standard I/O the same number of iterations; the average time was 52 milliseconds. Almost half the time! There is often a trade-off between convenience and performance, and serialization proves no different. If speed is the primary consideration for your application, you may want to consider building a custom protocol.
Another consideration concerns the aforementioned fact that object references are cached in the output stream. Due to that, the system may not garbage collect the objects written to a stream if the stream is not closed. The best move, as always with I/O, is to close the streams as soon as possible, following the write operations.

Sunday, January 30, 2011

Singleton class:::Nucleus Software Java Interview Questions: Qu7

What is a singleton class? How will you make a class a singleton?
----------------------------------------------------------------------

Singleton class:
This is a class which can be instantiated only once.

Eg:

public class Singleton
{
    // the single instance, created when the class is loaded
    private static final Singleton single = new Singleton();

    private Singleton() { }   // private constructor prevents outside instantiation

    public static Singleton getInstance()
    {
        return single;
    }
}


For a singleton class, the constructor is made private, and a static variable holds the single instance of the class; a public static method (getInstance() above) provides access to it.
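A hedged usage sketch: client code never calls the constructor, it only asks for the shared instance.

Singleton s1 = Singleton.getInstance();
Singleton s2 = Singleton.getInstance();
System.out.println(s1 == s2);   // true: both references point to the same object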
 
 

Implicit objects in JSP::Nucleus Software Java Interview Questions: Qu5

Implicit objects in JSP
-----------------------------------------------------------------------------------------------------------

Implicit objects in JSP are objects that are created automatically by the container and made available to developers; the developer does not need to create them explicitly. Since these objects are created automatically by the container and are accessed using standard variables, they are called implicit objects. The implicit objects are inserted by the container into the generated servlet code. They are available only within the _jspService method and not in any declaration. Implicit objects are used for different purposes. Our own (user-defined) methods cannot access them, as they are local to the service method and are created at the time the JSP is translated into a servlet, but we can pass them to our own methods if we wish to use them there.

There are nine implicit objects. Here is the list of all the implicit objects: 


Object        Class
application   javax.servlet.ServletContext
config        javax.servlet.ServletConfig
exception     java.lang.Throwable
out           javax.servlet.jsp.JspWriter
page          java.lang.Object
pageContext   javax.servlet.jsp.PageContext
request       javax.servlet.ServletRequest
response      javax.servlet.ServletResponse
session       javax.servlet.http.HttpSession
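As a hedged illustration (the page fragment below is invented), a small JSP scriptlet that uses several of these implicit objects directly, with no declarations:

<%-- implicit objects are available without any declaration --%>
<%
    String user = request.getParameter("user");              // request
    session.setAttribute("lastUser", user);                   // session
    String appName = application.getServletContextName();     // application
    out.println("Hello " + user + " from " + appName);        // out
%>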

Stored Procedure and Function::::Nucleus Software Java Interview Questions: Qu4

Difference between Stored Procedure and Function
-----------------------------------------------------------------------------------------------------------
                                      Answer 1

1) Functions are used for computations, whereas procedures can be used for performing business logic.

2) Functions MUST return a value; procedures need not.

3) You can have DML (insert, update, delete) statements in a function, but you cannot call such a function from a SQL
query. For example, if you have a function that updates a table, you can't call that function from any SQL query:
select myFunction(field) from sometable; will throw an error.

4) Function parameters are always IN; no OUT is possible.
 
---------------------------------------------------------------------------------------------------------
                                          Answer 2: 

Functions and stored procedures serve separate purposes. Although it's not the best analogy, functions can be viewed literally as any other function you'd use in any programming language, but stored procs are more like individual programs or a batch script.
Functions normally have an output and optionally inputs. The output can then be used as the input to another function (a SQL Server built-in such as DATEDIFF, LEN, etc) or as a predicate to a SQL Query - e.g., SELECT a, b, dbo.MyFunction(c) FROM table or SELECT a, b, c FROM table WHERE a = dbo.MyFunc(c).
Stored procs are used to bind SQL queries together in a transaction, and interface with the outside world. Frameworks such as ADO.NET, etc. can't call a function directly, but they can call a stored proc directly.
Functions do have a hidden danger though: they can be misused and cause rather nasty performance issues: consider this query:

SELECT * FROM dbo.MyTable WHERE col1 = dbo.MyFunction(col2)

Where MyFunction is declared as:

CREATE FUNCTION MyFunction (@someValue INTEGER) RETURNS INTEGER
AS
BEGIN
   DECLARE @retval INTEGER

   SELECT @retval = localValue
     FROM dbo.localToNationalMapTable
     WHERE nationalValue = @someValue

   RETURN @retval
END
 
What happens here is that the function MyFunction is called for every row in the table MyTable. If MyTable has 1000 rows, then that's another 1000 ad-hoc queries against the database. Similarly, if the function is called when specified in the column spec, then the function will be called for each row returned by the SELECT.
So you do need to be careful writing functions. If you do SELECT from a table in a function, you need to ask yourself whether it can be better performed with a JOIN in the parent stored proc or some other SQL construct (such as CASE ... WHEN ... ELSE ... END).
 
 

Transient and Volatile ::::Nucleus Software Java Interview Questions: Qu3

Transient and Volatile keywords in java

Answers 1: Java defines two interesting type modifiers: transient and volatile. These modifiers are used to handle somewhat specialized situations.
When an instance variable is declared as transient, then its value need not persist when an object is stored. 

For example:
class T {
    transient int a; // will not persist
    int b;           // will persist
}

Here, if an object of type T is written to a persistent storage area, the contents of a would not be saved, but the contents of b would.

The volatile modifier tells the compiler that the variable modified by volatile can be changed unexpectedly by other parts of your program. One of these situations involves multithreaded programs. In a multithreaded program, sometimes, two or more threads share the same instance variable. For efficiency considerations, each thread can keep its own, private copy of such a shared variable. The real (or master) copy of the variable is updated at various times, such as when a synchronized method is entered. While this approach works fine, it may be inefficient at times. In some cases, all that really matters is that the master copy of a variable always reflects its current state. To ensure this, simply specify the variable as volatile, which tells the compiler that it must always use the master copy of a volatile variable (or, at least, always keep any private copies up to date with the master copy, and vice versa). Also, accesses to the master variable must be executed in the precise order in which they are executed on any private copy.


Note: volatile in Java has, more or less, the same meaning that it has in C/C++.
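A hedged sketch of a typical use (the class and method names are invented): a volatile flag used to stop a worker thread cleanly, without any synchronization:

class Worker implements Runnable {
    // volatile: every read sees the most recent write from any thread
    private volatile boolean running = true;

    public void run() {
        while (running) {
            // ... do some work ...
        }
    }

    public void stopWork() {
        running = false;   // the change is immediately visible to the worker thread
    }
}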

Answer 2: The volatile keyword tells the compiler that a variable modified by "volatile" can be changed unexpectedly by other parts of your program.


Transient: when a variable is declared as transient, then its value need not persist when the object is stored.

What is index ::Nucleus Software Java Interview Questions: Qu2

What is an index and why is it used?

Answers: 

1. Oracle includes numerous data structures to improve the speed of Oracle SQL queries. Taking advantage of the low cost of disk storage, Oracle includes many new indexing algorithms that dramatically increase the speed with which Oracle queries are serviced. This article explores the internals of Oracle indexing; reviews the standard b-tree index, bitmap indexes, function-based indexes, and index-only tables (IOTs); and demonstrates how these indexes may dramatically increase the speed of Oracle SQL queries.

Oracle uses indexes to avoid the need for large-table, full-table scans and disk sorts, which are required when the SQL optimizer cannot find an efficient way to service the SQL query. I begin our look at Oracle indexing with a review of standard Oracle b-tree index methodologies.



2. Oracle indexes : Proper database indexing is a crucial factor for your database performance.
Most Oracle databases have hundreds or even thousands of indexes. This large number of indexes and their complexity make index tuning and monitoring a difficult task for the DBA.
As time goes, even originally efficient indexes may become inefficient due to various index distortions caused by data changes in the indexed tables. 

The question is: How to manage Oracle indexes and what different options are available to use them?

Indexes are logically and physically independent of the data in the associated table. The DBA can create or drop an index at any time without affecting the base tables or other indexes. If the DBA drops an index, all applications continue to work.

However, access to previously indexed data might be slower. Indexes, being independent structures, require storage space.

Oracle automatically maintains and uses indexes after they are created. Oracle automatically reflects changes to data, such as adding new rows, updating rows, or deleting rows, in all relevant indexes with no additional action by users.

Statement and Prepared Statement:::::Nucleous Software Java Interview Questions: Qu1

Difference between Statement and Prepared Statement in JDBC


1. The PreparedStatement is a slightly more powerful version of a Statement, and should always be at least as quick and easy to handle as a Statement

The PreparedStatement may be parametrized.

2. In general a PreparedStatement will outperform a Statement, but there are cases in which a Statement is faster, and you may have created this situation. The main advantage to a PreparedStatement is that it does not need to be compiled on the database every time, the database compiles it once and prepares itself as well as it can given that it doesn't know everything about the query. It may be possible to add more performance tweaks, but the database just can't be sure.

When it comes to a Statement with no unbound variables, the database is free to optimise to its full extent. The individual query will be faster, but the down side is that you need to do the database compilation all the time, and this is worse than the benefit of the faster query.

Unless the same Statement gets run over and over, in which case it will be cached too and behave like a better-optimised PreparedStatement. In practice I've found the difference is not actually worth worrying about. Even if a PreparedStatement is 'no better than' a Statement, its flexibility is much nicer.


3.
Interface          Recommended Use
Statement          Use for general-purpose access to your database. Useful when you are using static SQL statements at runtime. The Statement interface cannot accept parameters.
PreparedStatement  Use when you plan to use the SQL statements many times. The PreparedStatement interface accepts input parameters at runtime.
CallableStatement  Use when you want to access database stored procedures. The CallableStatement interface can also accept runtime input parameters.
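As a hedged sketch (the table and column names are invented, and an open java.sql.Connection is assumed), here is the same query written with a Statement and with a parametrized PreparedStatement:

import java.sql.*;

public class StatementVsPrepared {
    static String findName(Connection conn, int id) throws SQLException {
        // Statement: SQL built by string concatenation, parsed by the database each time
        Statement stmt = conn.createStatement();
        ResultSet rs1 = stmt.executeQuery("SELECT name FROM employees WHERE id = " + id);

        // PreparedStatement: parametrized, can be compiled once and reused safely
        PreparedStatement ps = conn.prepareStatement("SELECT name FROM employees WHERE id = ?");
        ps.setInt(1, id);
        ResultSet rs2 = ps.executeQuery();
        return rs2.next() ? rs2.getString("name") : null;
    }
}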

Wednesday, January 19, 2011

Constraints information in the database catalog in DB2

Table 1. Constraints information in the database catalog

Catalog view | View column | Description | Query example
SYSCAT.CHECKS | | Contains a row for each table check constraint | db2 select constname, tabname, text from syscat.checks
SYSCAT.COLCHECKS | | Contains a row for each column that is referenced by a table check constraint | db2 select constname, tabname, colname, usage from syscat.colchecks
SYSCAT.COLUMNS | NULLS | Indicates whether a column is nullable (Y) or not nullable (N) | db2 select tabname, colname, nulls from syscat.columns where tabschema = 'DELSVT' and nulls = 'N'
SYSCAT.CONSTDEP | | Contains a row for each dependency of a constraint on some other object | db2 select constname, tabname, btype, bname from syscat.constdep
SYSCAT.INDEXES | | Contains a row for each index | db2 select tabname, uniquerule, made_unique, system_required from syscat.indexes where tabschema = 'DELSVT'
SYSCAT.KEYCOLUSE | | Contains a row for each column that participates in a key defined by a unique, primary key, or foreign key constraint | db2 select constname, tabname, colname, colseq from syscat.keycoluse
SYSCAT.REFERENCES | | Contains a row for each referential constraint | db2 select constname, tabname, refkeyname, reftabname, colcount, deleterule, updaterule from syscat.references
SYSCAT.TABCONST | | Contains a row for each unique (U), primary key (P), foreign key (F), or table check (K) constraint | db2 select constname, tabname, type from syscat.tabconst
SYSCAT.TABLES | PARENTS | Number of parent tables of this table (the number of referential constraints in which this table is a dependent) | db2 "select tabname, parents from syscat.tables where parents > 0"
SYSCAT.TABLES | CHILDREN | Number of dependent tables of this table (the number of referential constraints in which this table is a parent) | db2 "select tabname, children from syscat.tables where children > 0"
SYSCAT.TABLES | SELFREFS | Number of self-referencing referential constraints for this table (the number of referential constraints in which this table is both a parent and a dependent) | db2 "select tabname, selfrefs from syscat.tables where selfrefs > 0"
SYSCAT.TABLES | KEYUNIQUE | Number of unique constraints (other than primary key) defined on this table | db2 "select tabname, keyunique from syscat.tables where keyunique > 0"
SYSCAT.TABLES | CHECKCOUNT | Number of check constraints defined on this table | db2 "select tabname, checkcount from syscat.tables where checkcount > 0"

How to see Primary key in a table in DB2

---------See primary key with column name in a table in db2---------------
db2  "select constname,tabname,colname from syscat.keycoluse where tabname='TABLENAME' "

It will give you.....

Example:::  

db2inst1@confonet:~> db2 "select constname,tabname,colname from syscat.keycoluse where tabname='COMPLAINANT_RESPONDENT'"

--------------Output--------------------------------
CONSTNAME            TABNAME                    COLNAME
-------------------- -------------------------- --------
SQL100416165400090   COMPLAINANT_RESPONDENT     CRSEQ
SQL100416165400090   COMPLAINANT_RESPONDENT     CR_TYPE
SQL100416165400090   COMPLAINANT_RESPONDENT     FANO

  3 record(s) selected.
==========================================================

Here the COMPLAINANT_RESPONDENT table has a composite primary key on three columns (FANO, CRSEQ, CR_TYPE).

Saturday, January 15, 2011

IBM Interview Questions: Struts workflow

Qu7: What is Struts and how does the ActionServlet work?


Answers:
The following events happen when the Client browser issues an HTTP request.
  • The ActionServlet receives the request.
  • The struts-config.xml file contains the details regarding the Actions, ActionForms, ActionMappings and ActionForwards.
  • During startup the ActionServlet reads the struts-config.xml file and creates a database of configuration objects. Later, while processing a request, the ActionServlet makes decisions by referring to these objects.
When the ActionServlet receives the request it does the following tasks (a small Action class sketch follows the list).
  • Bundles all the request values into a JavaBean class which extends the Struts ActionForm class.
  • Decides which Action class to invoke to process the request.
  • Validates the data entered by the user.
  • The Action class processes the request with the help of the model component. The model interacts with the database and processes the request.
  • After completing the request processing, the Action class returns an ActionForward to the controller.
  • Based on the ActionForward, the controller invokes the appropriate view.
  • The HTTP response is rendered back to the user by the view component.
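As a hedged sketch (class name invented, assuming the classic Struts 1.x API), a minimal Action that the ActionServlet would invoke for a mapped request:

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.apache.struts.action.Action;
import org.apache.struts.action.ActionForm;
import org.apache.struts.action.ActionForward;
import org.apache.struts.action.ActionMapping;

public class LoginAction extends Action {
    public ActionForward execute(ActionMapping mapping, ActionForm form,
                                 HttpServletRequest request,
                                 HttpServletResponse response) throws Exception {
        // process the request (real logic would delegate to the model layer here)
        String user = request.getParameter("user");
        // pick the next view via an ActionForward defined in struts-config.xml
        return mapping.findForward(user != null ? "success" : "failure");
    }
}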

IBM Interview Questions: Make a class Immutable

Qu6: How will you make a class immutable??

Answer: Make the class final so that it cannot be subclassed, and also make its fields and methods final so that the methods cannot be overridden and the field values cannot be modified.
--------------------------------------------------------------------------------------------------------------------------
Making a class immutable
 
Immutability must be familiar to everyone when we talk about the String & StringBuffer classes in Java. Strings are considered immutable because the values contained in the reference variable cannot be changed, whereas StringBuffer is considered mutable because the value in a string buffer can be changed.
However, I always wondered how to make our user-defined classes immutable, though I was unsure why anyone would need this.
The reason perhaps might become clear once we have a look at the code.
Now, in order to make a class immutable we must restrict changing the state of the class object by any means. This in turn means avoiding assignment to a variable, which we can achieve through a final modifier. To further restrict the access we can use a private access modifier. Beyond that, do not provide any method that modifies the instance variables.
Are we done? No. What if somebody creates a subclass of our so-far immutable class? Here lies the problem: the new subclass can contain methods which override our base class (immutable class) methods and change the variable values.
Hence make the methods in the class final as well, or, better, make the immutable class itself final; then no subclasses can be made, so there is no question of overriding.

The following code gives a way to make the class immutable.

// The immutable class, which is made final
final class MyImmutableClass
{
    // instance variables are made private & final to restrict access
    private final int count;
    private final double value;

    // constructor where we can provide the constant values
    public MyImmutableClass(int paramCount, double paramValue)
    {
        count = paramCount;
        value = paramValue;
    }

    // provide only methods which return the instance variables
    // & do not change the values
    public int getCount()
    {
        return count;
    }

    public double getValue()
    {
        return value;
    }
}

// class TestImmutable
public class TestImmutable
{
    public static void main(String[] args)
    {
        MyImmutableClass obj1 = new MyImmutableClass(3, 5);
        System.out.println(obj1.getCount());
        System.out.println(obj1.getValue());
        // there is no way to change the values of count & value -
        // no method to call besides getXX, no subclassing, no public access to the variables -> immutable
    }
}
A possible use of immutable classes would be a class containing a price list for a set of products.
In general, this also represents a good design.


---------------------------------------------------------------------------------------------------------------------------
If you know a better answer, please reply.

enum (in Java)::::Interview Questions

Define enum 
----------------------------------------------------------------------------------------------
Answer: In prior releases, the standard way to represent an enumerated type was the int Enum pattern:
// int Enum Pattern - has severe problems!
public static final int SEASON_WINTER = 0;
public static final int SEASON_SPRING = 1;
public static final int SEASON_SUMMER = 2;
public static final int SEASON_FALL = 3;
This pattern has many problems, such as:
  • Not typesafe - Since a season is just an int you can pass in any other int value where a season is required, or add two seasons together (which makes no sense).
  • No namespace - You must prefix constants of an int enum with a string (in this case SEASON_) to avoid collisions with other int enum types.
  • Brittleness - Because int enums are compile-time constants, they are compiled into clients that use them. If a new constant is added between two existing constants or the order is changed, clients must be recompiled. If they are not, they will still run, but their behavior will be undefined.
  • Printed values are uninformative - Because they are just ints, if you print one out all you get is a number, which tells you nothing about what it represents, or even what type it is.
It is possible to get around these problems by using the Typesafe Enum pattern (see Effective Java Item 21), but this pattern has its own problems: It is quite verbose, hence error prone, and its enum constants cannot be used in switch statements. In 5.0, the Java™ programming language gets linguistic support for enumerated types. In their simplest form, these enums look just like their C, C++, and C# counterparts:
enum Season { WINTER, SPRING, SUMMER, FALL }
But appearances can be deceiving. Java programming language enums are far more powerful than their counterparts in other languages, which are little more than glorified integers. The new enum declaration defines a full-fledged class (dubbed an enum type). In addition to solving all the problems mentioned above, it allows you to add arbitrary methods and fields to an enum type, to implement arbitrary interfaces, and more. Enum types provide high-quality implementations of all the Object methods. They are Comparable and Serializable, and the serial form is designed to withstand arbitrary changes in the enum type.
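As a rough illustration of that power, here is a small sketch that extends the Season example with a field, a constructor and a method (the temperature values are made up for the example):

// A sketch: an enum carrying a field, a constructor and a method.
enum Season {
    WINTER(-5), SPRING(15), SUMMER(30), FALL(10);

    private final int typicalTempCelsius;

    Season(int typicalTempCelsius) {
        this.typicalTempCelsius = typicalTempCelsius;
    }

    public int getTypicalTempCelsius() {
        return typicalTempCelsius;
    }
}

class EnumDemo {
    public static void main(String[] args) {
        for (Season s : Season.values()) {
            // values(), name() and ordinal() are provided automatically by the compiler.
            System.out.println(s + " ~" + s.getTypicalTempCelsius() + " C");
        }
        // Enums are typesafe and work directly in switch statements.
        Season s = Season.SUMMER;
        switch (s) {
            case SUMMER: System.out.println("Hot");        break;
            default:     System.out.println("Not summer"); break;
        }
    }
}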

IBM Interview Questions: difference between abstraction and encapsulation

Difference between abstraction and encapsulation


---------------------------------------------------------------------------------------------------------

  • Abstraction lets you focus on what the object does instead of how it does it
  • Encapsulation means hiding the internal details or mechanics of how an object does something.
 ---------------------------------------------------------------------------------------------------------
Abstraction: the idea of presenting something in a simplified or different way that is either easier to understand and use or more pertinent to the situation.
Consider a class that sends an email: it uses abstraction to present itself to you as a kind of messenger boy, so you can simply call emailSender.send(mail, recipient). What it actually does (speaking SMTP, connecting to mail servers, MIME encoding, and so on) is abstracted away. You only see your messenger boy.


Encapsulation: The idea of securing and hiding data and methods that are private to an object. It deals more with making something independent and foolproof.
Take me, for instance. I encapsulate my heart rate from the rest of the world, because I don't want anyone else changing that variable, and I don't need anyone else to set it in order for me to function. It's vitally important to me, but you don't need to know what it is, and you probably don't care anyway.

Look around and you'll find that almost everything you touch is an example of both abstraction and encapsulation. Your phone, for instance, presents to you the abstraction of being able to take what you say and say it to someone else, covering up GSM, processor architecture, radio frequencies, and a million other things you don't understand or care to. It also encapsulates certain data from you, like serial numbers, ID numbers, frequencies, etc.
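To tie the two ideas to code, here is a minimal Java sketch along the lines of the email example above; the EmailSender name and send() signature are invented for illustration, not a real mail API:

// Abstraction: callers only see "send"; Encapsulation: internal state stays hidden.
interface EmailSender {
    void send(String message, String recipient);
}

class SmtpEmailSender implements EmailSender {
    private String smtpHost = "mail.example.com"; // encapsulated: nobody outside can touch it

    public void send(String message, String recipient) {
        // The protocol details are hidden behind the interface; here we just simulate them.
        System.out.println("Sending '" + message + "' to " + recipient + " via " + smtpHost);
    }
}

class MailDemo {
    public static void main(String[] args) {
        EmailSender sender = new SmtpEmailSender(); // caller depends only on the abstraction
        sender.send("Hello", "someone@example.com");
    }
}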

IBM Interview Questions: vector implementation

Qu5: Tell in a web based project where we need to implement Vector..

Answer:

java.util
Class Vector

java.lang.Object
  extended by java.util.AbstractCollection
    extended by java.util.AbstractList
      extended by java.util.Vector
All Implemented Interfaces:
Cloneable, Collection, List, RandomAccess, Serializable
Direct Known Subclasses:
Stack


public class Vector
extends AbstractList
implements List, RandomAccess, Cloneable, Serializable
The Vector class implements a growable array of objects. Like an array, it contains components that can be accessed using an integer index. However, the size of a Vector can grow or shrink as needed to accommodate adding and removing items after the Vector has been created.
Each vector tries to optimize storage management by maintaining a capacity and a capacityIncrement. The capacity is always at least as large as the vector size; it is usually larger because as components are added to the vector, the vector's storage increases in chunks the size of capacityIncrement. An application can increase the capacity of a vector before inserting a large number of components; this reduces the amount of incremental reallocation.

As of the Java 2 platform v1.2, this class has been retrofitted to implement List, so that it becomes a part of Java's collection framework. Unlike the new collection implementations, Vector is synchronized.
The Iterators returned by Vector's iterator and listIterator methods are fail-fast: if the Vector is structurally modified at any time after the Iterator is created, in any way except through the Iterator's own remove or add methods, the Iterator will throw a ConcurrentModificationException.

Thus, in the face of concurrent modification, the Iterator fails quickly and cleanly, rather than risking arbitrary, non-deterministic behavior at an undetermined time in the future. The Enumerations returned by Vector's elements method are not fail-fast.

Note that the fail-fast behavior of an iterator cannot be guaranteed as it is, generally speaking, impossible to make any hard guarantees in the presence of unsynchronized concurrent modification. Fail-fast iterators throw ConcurrentModificationException on a best-effort basis.

Therefore, it would be wrong to write a program that depended on this exception for its correctness: the fail-fast behavior of iterators should be used only to detect bugs.

This class is a member of the Java Collections Framework.
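As a small, illustrative usage sketch (not tied to any particular web framework), the snippet below shows a Vector created with an explicit capacity and capacityIncrement, used like a List, and walked with the legacy Enumeration API mentioned above:

import java.util.Enumeration;
import java.util.Vector;

public class VectorDemo {
    public static void main(String[] args) {
        // initial capacity 10, capacityIncrement 5
        Vector<String> items = new Vector<String>(10, 5);
        items.add("item-1");
        items.add("item-2");
        items.add(0, "item-0");          // positional insert, just like any List

        System.out.println("size = " + items.size() + ", element at 1 = " + items.get(1));

        // Enumeration-style access (legacy API); note it is not fail-fast.
        for (Enumeration<String> e = items.elements(); e.hasMoreElements(); ) {
            System.out.println(e.nextElement());
        }
    }
}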


---------------------------------------------------------------------------------------------------------------------------
If you know a better answer, please reply...

IBM Interview Questions: Qu3, Qu4

Qu3: Difference between Abstract Class and Interface.


  • Multiple inheritance: a class may implement several interfaces, but may extend only one abstract class.
  • Default implementation: an interface cannot provide any code, just the method signatures; an abstract class can provide complete default code and/or just the details that have to be overridden.
  • Access modifiers: an interface cannot have access modifiers on its methods, properties, etc. (everything is implicitly public); an abstract class can use access modifiers for its methods, properties and fields.
  • Core vs. peripheral: interfaces are used to define the peripheral abilities of a class (both Human and Vehicle can implement an IMovable interface); an abstract class defines the core identity of a class and is used for objects of the same type.
  • Homogeneity: if the various implementations only share method signatures, an interface is the better choice; if they are of the same kind and share common behaviour or state, an abstract class is better.
  • Speed: calls through an interface may take slightly longer to resolve to the actual method in the implementing class; calls on an abstract class are fast.
  • Adding functionality (versioning): if we add a new method to an interface, we have to track down every implementation and define the new method there; if we add it to an abstract class, we can provide a default implementation so that all existing code keeps working.
  • Fields and constants: no fields can be defined in an interface (only constants); an abstract class can have both fields and constants defined.
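A minimal sketch contrasting the two (the Movable/Vehicle/Car names are made up for illustration): the interface declares a peripheral ability with no code, while the abstract class carries shared state and a default method and leaves the rest to subclasses.

// Interface: a capability, signatures only.
interface Movable {
    void move();                          // implicitly public abstract
}

// Abstract class: core identity, with a field and a default implementation.
abstract class Vehicle implements Movable {
    protected int wheels;                 // abstract classes may hold fields

    Vehicle(int wheels) { this.wheels = wheels; }

    public void describe() {              // default, shared behaviour
        System.out.println("A vehicle with " + wheels + " wheels");
    }

    public abstract void move();          // left for subclasses to implement
}

class Car extends Vehicle {
    Car() { super(4); }
    public void move() { System.out.println("Car drives on the road"); }
}

class CompareDemo {
    public static void main(String[] args) {
        Vehicle v = new Car();
        v.describe();
        v.move();
    }
}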

 --------------------------------------------------------------------------------------------------
Qu4: Difference between Hashtable and HashMap.

1. HashMap is a newer class (part of the Java 2 collections framework), whereas Hashtable is a legacy class.

2. HashMap is not synchronized and therefore not thread-safe; Hashtable is synchronized and thread-safe.

3. HashMap allows one null key and any number of null values, whereas Hashtable doesn't allow null keys or values.

4. Neither class guarantees any iteration order; if ordered retrieval is needed, use LinkedHashMap (insertion order) or TreeMap (sorted order) instead.

5. Because it is unsynchronized, HashMap is generally faster than Hashtable for searching and other operations.
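A small sketch of the practical differences (nothing framework-specific is assumed): HashMap tolerates nulls and is unsynchronized, Hashtable rejects nulls, and Collections.synchronizedMap is the usual way to get a synchronized map from a HashMap today.

import java.util.Collections;
import java.util.HashMap;
import java.util.Hashtable;
import java.util.Map;

public class MapDemo {
    public static void main(String[] args) {
        Map<String, String> hashMap = new HashMap<String, String>();
        hashMap.put(null, "null key is allowed");    // OK in HashMap
        hashMap.put("k", null);                      // null values are allowed too

        Map<String, String> hashtable = new Hashtable<String, String>();
        try {
            hashtable.put(null, "value");            // Hashtable rejects null keys and values
        } catch (NullPointerException e) {
            System.out.println("Hashtable does not accept null: " + e);
        }

        // If a synchronized map is needed, wrapping a HashMap is the usual modern choice.
        Map<String, String> syncMap = Collections.synchronizedMap(new HashMap<String, String>());
        syncMap.put("k", "v");

        System.out.println(hashMap + " / " + syncMap);
    }
}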