
Monday, January 31, 2011

Gmail Multiple User Names...


Many of you might not know that if you have one Gmail id, you effectively have infinitely many Gmail ids.
Sounds like a joke? It's not, actually.
For example, if you have the Gmail id:
venuhyd@gmail.com
then mail sent to the following ids will also be received by your mailbox:
venu+hyd@gmail.com (the part after the + can be any valid string of characters)
v.e.n.u.h.y.d@gmail.com
venuhy.d@gmail.com
This works because Gmail ignores dots in its usernames.
An alias made by mixing several such aliases is also an alias for your id;
for example, my.i.d+love@googlemail.com will also direct its mail to your inbox.
You can use this to filter e-mail from particular sources: give each source its own alias and then set up a filter on the "To" field.
Just check this out...
Regards,
Venu.

Find Kth largest and smallest number in BST

Q: How do you find the kth smallest element in a BST, given that you cannot use a static/global variable and cannot pass the value of k to any helper function?
Solution: Do an in-order traversal and output the kth element visited.
Alternatively, let each node in the BST store the number of elements in its left and right subtrees. Let the left subtree of node T contain only elements smaller than T and the right subtree only elements larger than or equal to T.
Now, suppose we are at node T:
1. If k == num_elements(left subtree of T) + 1, the answer we're looking for is the value in node T.
2. If k > num_elements(left subtree of T) + 1, we can ignore the left subtree and node T, because those elements are all smaller than the kth smallest. So we reduce the problem to finding the (k - num_elements(left subtree of T) - 1)th smallest element of the right subtree.
3. If k <= num_elements(left subtree of T), the kth smallest is somewhere in the left subtree, so we reduce the problem to finding the kth smallest element in the left subtree.
This is O(log N) on average (assuming a balanced tree).

Java code:

public int returnKthSmallestElement(int k) {
    Node node = root;
    int count = k;
    while (node != null) {
        int sizeOfLeftSubtree = node.sizeOfLeftSubtree();
        if (sizeOfLeftSubtree + 1 == count) {
            return node.value; // this node is the kth smallest
        } else if (sizeOfLeftSubtree < count) {
            // kth smallest lies in the right subtree;
            // skip the left subtree and this node
            node = node.right;
            count -= sizeOfLeftSubtree + 1;
        } else {
            // kth smallest lies in the left subtree
            node = node.left;
        }
    }
    return -1; // k is out of range
}
find(node, k) {
    if (count(node->left) + 1 == k) return node;
    else if (count(node->left) < k) return find(node->right, k - count(node->left) - 1);
    else return find(node->left, k);
}
For the kth largest element, mirror the logic using node->right.
struct tree* findkthsmallest(struct tree* t, int k)
{
    int left_size = 0;
    if (t->left != NULL) left_size = t->left->size;

    if (left_size + 1 == k) return t;
    else if (k <= left_size) return findkthsmallest(t->left, k);
    else return findkthsmallest(t->right, k - left_size - 1); /* find the (k - left_size - 1)th smallest in the right subtree */
}

struct tree* findkthlargest(struct tree* t, int k)
{
    int right_size = 0;
    if (t->right != NULL) right_size = t->right->size;

    if (right_size + 1 == k) return t;
    else if (k <= right_size) return findkthlargest(t->right, k);
    else return findkthlargest(t->left, k - right_size - 1);
}
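
Both routines above assume that every node stores the size of its subtree. Here is a minimal Java sketch, under that assumption, of keeping such a size field up to date on insertion (the Node class and field names are illustrative, not from the code above):

class Node {
    int value;
    int size = 1; // number of nodes in the subtree rooted here
    Node left, right;
    Node(int value) { this.value = value; }
}

// Insert into a size-augmented BST: every node on the path from the
// root down to the new leaf has its subtree size incremented by one.
static Node insert(Node root, int value) {
    if (root == null) return new Node(value);
    root.size++;
    if (value < root.value) root.left = insert(root.left, value);
    else root.right = insert(root.right, value);
    return root;
}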

Trie Data structure

Refer : http://linux.thai.net/~thep/datrie/datrie.html


 "Why use tries if set <string> and hash tables can do the same?" There are two main reasons:
  • The tries can insert and find strings in O(L) time (where L represent the length of a single word). This is much faster than set , but is it a bit faster than a hash table.
  • The set <string> and the hash tables can only find in a dictionary words that match exactly with the single word that we are finding; the trie allow us to find words that have a single character different, a prefix in common, a character missing, etc.
Refer for java program : http://www.technicalypto.com/2010/04/trie-data-structure-part-2-node-and.html

http://flexaired.blogspot.com/2009/12/flex-email-client-for-google-app-engine_13.html

http://www1bpt.bridgeport.edu/~dichter/cs201/Trie.Handout.doc



Q) What is the best way, memory-wise, to store 10 crore (100 million) mobile numbers?
A trie can be considered a tree in which each node holds one character and has up to 26 child nodes, one for each letter from 'a' to 'z'. E.g., to represent "sachin", we create the trie chain s->a->c->h->i->n, i.e. 6 nodes. Now when we want to add "sachim" to the database, we just create another child node of i, namely m, so there are two nodes going out from i. This way we keep adding nodes as and when required. To detect the end of each word, we terminate it with a special character inserted after the last child node, so that whenever we encounter that character, we know that we have reached the end of a word. (For mobile numbers, the branching factor would be 10, one child per digit, rather than 26.)
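
A minimal Java sketch of the node structure and insertion just described (class and method names are illustrative; a boolean end-of-word flag stands in for the special terminator character):

class TrieNode {
    TrieNode[] children = new TrieNode[26]; // one slot per letter 'a'..'z'
    boolean isEndOfWord;                    // stands in for the end-of-word marker

    void insert(String word) {
        TrieNode node = this;
        for (char c : word.toCharArray()) {
            int i = c - 'a';
            if (node.children[i] == null) node.children[i] = new TrieNode();
            node = node.children[i];
        }
        node.isEndOfWord = true;
    }
}

Inserting "sachin" and then "sachim" shares the first five nodes; only the last node differs. For mobile numbers, the child array would have 10 slots (digits 0-9) instead of 26.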


A trie is better in terms of scalability and performance. With a hashtable, there is the problem of rehashing when all buckets are full, and rehashing takes O(N). Although it happens once in a blue moon, it can impact performance in a production environment. A trie doesn't have this problem.
http://www.youtube.com/watch?v=uhAUk63tLRM

Difference between Tree Map and Hash Map


HashMap
HashMap allows null as both a key and a value. HashMap is useful when we need to access the map without considering how entries were added to it (that is, unordered lookup of values by their keys). HashMap is not synchronized, so it must be wrapped (e.g. with Collections.synchronizedMap) for safe concurrent use. HashMap doesn't allow duplicate keys; putting a value under an existing key replaces the old value.
The performance of HashMap is based on two optional parameters which we can specify during the creation of the HashMap: initial capacity and load factor. Initial capacity is the bucket count assigned to a HashMap at creation. The load factor decides when the HashMap needs to be expanded: if the load factor is 0.75, the size will be increased when the current size of the map crosses 75% of its capacity.

TreeMap
The basic difference between HashMap and TreeMap is that in a TreeMap the elements are stored in a tree. TreeMap allows us to retrieve the elements in a sorted order defined by the user, so we can say that TreeMap is slower than HashMap. It is the principal implementation of the SortedMap interface.
TreeMap allows us to specify an optional Comparator object during its creation. The keys should be compatible with the comparator specified; this comparator decides the order by which the keys are sorted.
If we do not specify a comparator, TreeMap will sort the keys in their natural order, i.e. it will treat them as instances of the Comparable interface.
A common mistake is failing to override hashCode() (and equals()). If we fail to do so, map.get(new MyCustomKey(<value>)) may not give you what you were expecting. So it is always advised to override hashCode() and equals() if objects of that class are used as keys.
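
A short sketch of the ordering difference (the iteration order shown for HashMap is just one possibility; it is not guaranteed):

import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

public class MapOrderDemo {
    public static void main(String[] args) {
        Map<String, Integer> hashMap = new HashMap<String, Integer>();
        Map<String, Integer> treeMap = new TreeMap<String, Integer>();
        for (String key : new String[] { "banana", "apple", "cherry" }) {
            hashMap.put(key, key.length());
            treeMap.put(key, key.length());
        }
        // HashMap: order depends on hashing, e.g. {banana=6, cherry=6, apple=5}
        System.out.println(hashMap);
        // TreeMap: keys always come out sorted: {apple=5, banana=6, cherry=6}
        System.out.println(treeMap);
    }
}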

Friday, January 28, 2011

Shell sort

Refer : http://education.cdacmumbai.in/education/pgdst/dsalfac/notes/ShellSort.pdf


Shellsort works by comparing elements that are distant from each other, rather than only adjacent elements as in insertion sort.

Shellsort makes multiple passes through a list and sorts a number of equally sized sets using the insertion sort.

Shellsort improves on the efficiency of insertion sort by quickly shifting values to their destination.

Shellsort is also known as diminishing increment sort.

The distance between compared elements decreases as the algorithm runs, until the last phase, in which adjacent elements are compared.

After each phase with some increment hk, we have a[i] ≤ a[i + hk] for every valid i; that is, all elements spaced hk apart are sorted.

The file is then said to be hk-sorted.

Advantages :

Shellsort is efficient for medium-sized lists; for very large lists, the algorithm is not the best choice. It is, however, among the fastest of the O(n²)-class sorting algorithms.

It is roughly 5 times faster than bubble sort and a little over twice as fast as insertion sort, its closest competitor.

Disadvantages :

The disadvantage of Shellsort is that it is a more complex algorithm and not nearly as efficient as merge, heap, or quick sort.

Shellsort is still significantly slower than merge, heap, and quick sort, but its relatively simple algorithm makes it a good choice for sorting lists of fewer than about 5000 items, unless speed is critical. It is also an excellent choice for repetitive sorting of smaller lists.

Best case: the best case for Shellsort is when the array is already sorted in the right order; the number of comparisons is then minimal.
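
The post gives no code for Shellsort, so here is a minimal Java sketch using the simple gap sequence n/2, n/4, ..., 1 (better gap sequences exist; this one is only illustrative):

static void shellSort(int[] a) {
    // start with a large gap and shrink it each pass; the final pass
    // (gap == 1) is an ordinary insertion sort over a nearly sorted array
    for (int gap = a.length / 2; gap > 0; gap /= 2) {
        for (int i = gap; i < a.length; i++) {
            int key = a[i];
            int j = i;
            while (j >= gap && a[j - gap] > key) {
                a[j] = a[j - gap]; // shift gap-distant elements right
                j -= gap;
            }
            a[j] = key;
        }
    }
}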

Thursday, January 27, 2011

Bucket Sort

Bucket sort, or bin sort, is a sorting algorithm that works by partitioning an array into a number of buckets. Each bucket is then sorted individually, either using a different sorting algorithm or by recursively applying bucket sort. It is a distribution sort, and a cousin of radix sort in its most-to-least-significant-digit flavour.



Bucket sort works as follows:
  1. Set up an array of initially empty "buckets."
  2. Scatter: Go over the original array, putting each object in its bucket.
  3. Sort each non-empty bucket.
  4. Gather: Visit the buckets in order and put all elements back into the original array.

Optimizations         

A common optimization is to put the elements back in the original array first, then run insertion sort over the complete array; because insertion sort's runtime is based on how far each element is from its final position, the number of comparisons remains relatively small, and the memory hierarchy is better exploited by storing the list contiguously in memory.
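
A minimal Java sketch of that optimization, assuming (as in the full example below) values in the range [0, 100): elements are scattered into buckets, concatenated back without sorting the buckets, and a single insertion sort finishes the job:

import java.util.ArrayList;
import java.util.List;

public class OptimizedBucketSort {
    static void sort(double[] a) {
        int n = a.length;
        List<List<Double>> buckets = new ArrayList<List<Double>>();
        for (int i = 0; i < n; i++) buckets.add(new ArrayList<Double>());
        for (double d : a) buckets.get((int) (d * n / 100.0)).add(d); // assumes 0.0 <= d < 100.0
        int pos = 0;
        for (List<Double> bucket : buckets)       // gather: buckets left unsorted
            for (double d : bucket) a[pos++] = d;
        for (int i = 1; i < n; i++) {             // one insertion sort over the whole array
            double key = a[i];
            int j = i - 1;
            while (j >= 0 && a[j] > key) { a[j + 1] = a[j]; j--; }
            a[j + 1] = key;
        }
    }
}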


Comparison with other sorting techniques:
1. Bucket sort can be seen as a generalization of counting sort; in fact, if each bucket has size 1, then bucket sort degenerates to counting sort.

2. Bucket sort with two buckets is effectively a version of quicksort where the pivot value is always selected to be the middle value of the value range. While this choice is effective for uniformly distributed inputs, other means of choosing the pivot in quicksort, such as randomly selected pivots, make it more resistant to clustering in the input distribution.

3. The n-way mergesort algorithm also begins by distributing the list into n sublists and sorting each one; however, the sublists created by mergesort have overlapping value ranges and so cannot be recombined by simple concatenation as in bucket sort. Instead, they must be interleaved by a merge algorithm. However, this added expense is counterbalanced by the simpler scatter phase and the ability to ensure that each sublist is the same size, providing a good worst-case time bound.

4. Top-down radix sort can be seen as a special case of bucket sort where both the range of values and the number of buckets is constrained to be a power of two. Consequently, each bucket's size is also a power of two, and the procedure can be applied recursively. This approach can accelerate the scatter phase, since we only need to examine a prefix of the bit representation of each element to determine its bucket.



Java code :


import java.util.ArrayList;
import java.util.Arrays;

public class BucketSort {

    public static void main(String args[]) {
        Double[] array = { 9.0, 8.0, 7.0, 6.0, 5.0, 4.0, 3.0, 2.0, 1.0 };
        ArrayList<Double> masterlist = new ArrayList<Double>(Arrays.asList(array));

        // all of the doubles will be 0.0 <= x < 100.0
        double rangeOfBucket = 100.0 / masterlist.size();

        // creating a new array of n buckets
        ArrayList[] buckets = new ArrayList[masterlist.size()];
        buckets = fillBuckets(buckets);

        double d;
        for (int i = 0; i < masterlist.size(); i++) {
            d = masterlist.get(i);
            // discover the appropriate bucket and dump d in it
            buckets[(int) (d / rangeOfBucket)].add(d);
        }

        masterlist = new ArrayList<Double>();

        for (int m = 0; m < buckets.length; m++) {
            buckets[m] = insertionSort(buckets[m]); // sort the bucket
            for (int n = 0; n < buckets[m].size(); n++) {
                // read the numbers back into the original list in order
                masterlist.add((Double) buckets[m].get(n));
            }
        }

        System.out.println(masterlist.toString());
    }

    /**
     * Modified version of InsertionSort found at
     * http://www.samspublishing.com/articles/article.asp?p=31526&seqNum=4&rl=1
     *
     * @param a The ArrayList to be sorted
     * @return A sorted ArrayList
     */
    public static ArrayList insertionSort(ArrayList a) {
        int in, out;
        for (out = 1; out < a.size(); out++) {
            double temp = (Double) a.get(out); // remove marked item
            in = out; // start shifts at out
            // until one is smaller,
            while (in > 0 && (Double) a.get(in - 1) >= temp) {
                a.set(in, a.get(in - 1)); // shift item right
                in--; // go left one position
            }
            a.set(in, temp); // insert marked item
        }
        return a;
    }

    /**
     * Helper method to fill an array of ArrayLists with _unique_ ArrayLists,
     * something that Arrays.fill() does not do.
     *
     * @param b The array of ArrayLists to be filled
     * @return The original array filled with empty ArrayLists
     */
    private static ArrayList[] fillBuckets(ArrayList[] b) {
        for (int i = 0; i < b.length; i++) {
            b[i] = new ArrayList();
        }
        return b;
    }
}

Quick Sort

Algorithm :
QUICKSORT(A, p, r)
1   if p < r
2       then q ← PARTITION(A, p, r)
3            QUICKSORT(A, p, q)
4            QUICKSORT(A, q + 1, r)
Notice that we only examine elements by comparing them to other elements; this makes quicksort a comparison sort.

Divide and conquer: algorithms that solve (conquer) problems by dividing them into smaller sub-problems until the problem is so small that it is trivially solved.

In place: in-place sorting algorithms don't require additional temporary space to store elements as they sort; they use the space originally occupied by the elements.

Analysis: The best-case behavior of quicksort occurs when each recursion step partitions the array into two parts of equal length. In order to sort n elements, the running time in this case is in Θ(n log(n)), because the recursion depth is log(n) and on each level there are n elements to be treated. The worst case occurs when each recursion step produces an unbalanced partitioning, namely one part consisting of only one element and the other part consisting of the rest of the elements. Then the recursion depth is n - 1 and quicksort runs in time Θ(n²).

The choice of the comparison element x determines which partition is achieved. Suppose the first element of the sequence is chosen as the comparison element; this leads to the worst-case behavior when the sequence is initially sorted. Therefore, it is better to choose the element in the middle of the sequence as the comparison element. Even better would be to take the n/2-th greatest element of the sequence (the median); then the optimal partition is achieved. Actually, it is possible to compute the median in linear time [AHU 74], and this variant of quicksort would run in time O(n log(n)) even in the worst case. However, the beauty of quicksort lies in its simplicity, and it turns out that even in its simple form quicksort runs in O(n log(n)) on average. Moreover, the constant hidden in the O-notation is small. Therefore, we trade this for the (rare) worst-case behavior of Θ(n²).

Java Code :
void quicksort(int[] a, int low, int high) {
    if (low > high)
        return;
    int i = low;
    int j = high;
    int h;
    // comparison element
    int x = a[(low + high) / 2];
    // partition
    do {
        while (a[i] < x)
            i++;
        while (a[j] > x)
            j--;
        if (i <= j) {
            h = a[i];
            a[i] = a[j];
            a[j] = h;
            i++;
            j--;
        }
    } while (i <= j);
    quicksort(a, low, j);
    quicksort(a, i, high);
}


Wednesday, January 26, 2011

Selection Sort

Refer for animated applets: http://www.cse.iitk.ac.in/users/dsrkg/cs210/applets/sortingII/heapSort/heapSort.html

Algorithm :
for i = 1:n,
    k = i
    for j = i+1:n, if a[j] < a[k], k = j
    → invariant: a[k] smallest of a[i..n]
    swap a[i,k]
    → invariant: a[1..i] in final position
end

java code :
void selectionSort(int[] a) {
    int length = a.length;
    for (int i = 0; i < length - 1; i++) {
        int minPos = i;
        for (int j = i + 1; j < length; j++) {
            if (a[j] < a[minPos]) {
                minPos = j;
            }
        }
        // swapping min element position with i th position
        int temp = a[minPos];
        a[minPos] = a[i];
        a[i] = temp;
    }
}

Properties :
Not stable
O(1) extra space
Θ(n²) comparisons
Θ(n) swaps
Not adaptive


Selecting the lowest element requires scanning all n elements (this takes n − 1 comparisons) and then swapping it into the first position. Finding the next lowest element requires scanning the remaining n − 1 elements and so on, for (n − 1) + (n − 2) + ... + 2 + 1 = n(n − 1) / 2 ∈ Θ(n²) comparisons (see arithmetic progression). Each of these scans requires one swap for n − 1 elements (the final element is already in place).


Among simple average-case Θ(n²) algorithms, selection sort almost always outperforms bubble sort and gnome sort, but is generally outperformed by insertion sort. Insertion sort is very similar in that after the kth iteration, the first k elements in the array are in sorted order. Insertion sort's advantage is that it only scans as many elements as it needs in order to place the (k + 1)st element, while selection sort must scan all remaining elements to find the (k + 1)st element.

While selection sort is preferable to insertion sort in terms of number of writes (Θ(n) swaps versus O(n²) swaps), it almost always far exceeds (and never beats) the number of writes that cycle sort makes, as cycle sort is theoretically optimal in the number of writes. This can be important if writes are significantly more expensive than reads, such as with EEPROM or Flash memory, where every write lessens the lifespan of the memory.

Insertion Sort

java code :
void insertionSort(int[] a) {
    for (int i = 1; i < a.length; i++) {
        int key = a[i];
        int pos = i - 1;
        // shift larger elements right until the slot for key is found
        while (pos >= 0 && a[pos] > key) {
            a[pos + 1] = a[pos];
            pos--;
        }
        a[pos + 1] = key;
    }
}

The best-case input is an array that is already sorted. In this case insertion sort has a linear running time (i.e., Θ(n)): during each iteration, the first remaining element of the input is only compared with the right-most element of the sorted subsection of the array.
The worst-case input is an array sorted in reverse order. In this case every iteration of the inner loop will scan and shift the entire sorted subsection of the array before inserting the next element, so insertion sort has a quadratic running time (i.e., O(n²)).
The average case is also quadratic, which makes insertion sort impractical for sorting large arrays. However, insertion sort is one of the fastest algorithms for sorting very small arrays, even faster than quicksort; indeed, good quicksort implementations use insertion sort for arrays smaller than a certain threshold, including when such arrays arise as subproblems. The exact threshold must be determined experimentally and depends on the machine, but is commonly around ten.

It is possible to find the insertion position of element ai faster, namely by binary search. However, moving the elements to the right to make room for the element being inserted takes linear time anyway.
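
A minimal Java sketch of that variant (sometimes called binary insertion sort): the position lookup drops to O(log n) via java.util.Arrays.binarySearch, but the shift remains O(n), so the overall bound stays quadratic.

static void binaryInsertionSort(int[] a) {
    for (int i = 1; i < a.length; i++) {
        int key = a[i];
        // binarySearch over the sorted prefix a[0..i) returns
        // -(insertionPoint) - 1 when the key is not present
        int pos = java.util.Arrays.binarySearch(a, 0, i, key);
        if (pos < 0) pos = -pos - 1;
        System.arraycopy(a, pos, a, pos + 1, i - pos); // the O(n) shift
        a[pos] = key;
    }
}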

Bubble sort

Algorithm

1. Compare each pair of adjacent elements from the beginning of the array and, if they are in reversed order, swap them.
2. If at least one swap has been done, repeat step 1.
You can imagine that on every step big bubbles float to the surface and stay there. At the step when no bubble moves, sorting stops. Let us see an example of sorting an array to make the idea of bubble sort clearer.

java code :

public static void bubbleSort(int[] data) {
    int length = data.length;
    for (int k = 0; k < length - 1; k++) {
        boolean isSorted = true;
        for (int i = 1; i < length - k; i++) {
            if (data[i] < data[i - 1]) {
                int tempVariable = data[i];
                data[i] = data[i - 1];
                data[i - 1] = tempVariable;
                isSorted = false;
            }
        }
        // no swaps in this pass means the array is already sorted
        if (isSorted)
            break;
    }
}


Properties :
Stable
O(1) extra space
O(n²) comparisons and swaps
Adaptive: O(n) when nearly sorted

Tuesday, January 25, 2011

ArrayList, LinkedList and Vector

ArrayList is in most cases what you want to use. It is a list backed by an array, which means it has fast access to each element via the get method.
Vector is a leftover from the early days of Java, retrofitted with the List interface. The chief difference from ArrayList is that its methods are synchronized (ArrayList's are not). That means it is easier to use in multi-threaded environments, but it does incur the synchronization overhead.
LinkedList is backed by a doubly-linked list, not an array. That means it's fast to access elements at the start and the end of the list, but less so in the middle. On the other hand, inserting and deleting elements is fast compared to ArrayList.


An ArrayList is a List implementation backed by a Java array, similar to the Vector class. As the number of elements in the collection increases, the internal array grows to fit them. If there are lots of growth periods, performance degrades as the old array needs to be copied into the new array. However, random access is very quick as it uses an array index to access.
With a LinkedList, the List implementation is backed by a doubly linked list data structure, allowing easy inserts/deletions anywhere in the structure, but really slow random accesses as the access must start at an end to get to the specific position.


A LinkedList is similar to an ArrayList in that it is ordered by index position, but it differs in that the elements are doubly linked to one another. This linkage gives you new methods for adding and removing from the beginning or end.

If your program frequently performs random access to the data of the list, the ArrayList class offers quick access to individual elements of the list. This quick access comes at the cost of slower operations for adding and removing in the middle of the list. If this latter behavior is what you desire, then the LinkedList class offers a better alternative: it provides quick sequential access, additions, and deletes, at the cost of slower random access.
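
A small sketch contrasting the two (the list size and index are arbitrary; asymptotic costs are noted in the comments):

import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

public class ListChoiceDemo {
    public static void main(String[] args) {
        List<Integer> arrayList = new ArrayList<Integer>();
        List<Integer> linkedList = new LinkedList<Integer>();
        for (int i = 0; i < 100000; i++) {
            arrayList.add(i);
            linkedList.add(i);
        }
        int a = arrayList.get(50000);  // O(1): direct array indexing
        int b = linkedList.get(50000); // O(n): walks links from the nearer end
        linkedList.add(0, -1);         // O(1): just relink the head
        arrayList.add(0, -1);          // O(n): shifts every element right
        System.out.println(a + " " + b);
    }
}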

Run-time polymorphism in Java

Polymorphism is the ability of a method to do different things based on the object that it is acting upon. In other words, polymorphism allows you to define one interface and have multiple implementations. This is one of the basic principles of object-oriented programming.

Method overriding is an example of runtime polymorphism. A method in a subclass can override a method in its superclass with the same name and signature. The Java virtual machine determines the proper method to call at runtime, not at compile time.

Let's take a look at the following example:

class Vehicle {
    void whoAmI() {
        System.out.println("I am a generic vehicle.");
    }
}

class Car extends Vehicle {
    void whoAmI() {
        System.out.println("I am a car.");
    }
}

class Bus extends Vehicle {
    void whoAmI() {
        System.out.println("I am a bus.");
    }
}

class RuntimePolymorphismDemo {

    public static void main(String[] args) {
        Vehicle ref1 = new Vehicle();
        Vehicle ref2 = new Car();
        Vehicle ref3 = new Bus();

        // the JVM dispatches to the method of the actual object's class
        ref1.whoAmI();
        ref2.whoAmI();
        ref3.whoAmI();
    }
}

The output is

I am a generic vehicle.
I am a car.
I am a bus.

String Literals

There are two ways to create a String object in Java:
Using the new operator.
For example, String str = new String("Hello");.
Using a string literal (or constant expression). For example,
String str = "Hello"; (string literal) or
String str = "Hel" + "lo"; (string constant expression).
What is the difference between these String creations? In Java, the equals method can be considered to perform a deep comparison of the value of an object, whereas the == operator performs a shallow comparison. The equals method compares the content of two objects rather than the two objects' references. The == operator, with reference types (i.e., objects), evaluates to true if the references are identical, that is, point to the same object; with value types (i.e., primitives) it evaluates to true if the values are identical. The equals method should return true if two objects have identical content; however, the equals method in the java.lang.Object class, the default if a class does not override it, returns true only if both references point to the same object.
Let's use the following example to see the difference between these ways of creating strings:
The creation of two strings with the same sequence of letters without the use of the new keyword will create pointers to the same String in the Java String literal pool. The String literal pool is a way Java conserves resources.
public class DemoStringCreation {

    public static void main(String args[]) {
        String str1 = "Hello";
        String str2 = "Hello";
        System.out.println("str1 and str2 are created by using string literal.");
        System.out.println(" str1 == str2 is " + (str1 == str2)); // true
        System.out.println(" str1.equals(str2) is " + str1.equals(str2)); // true

        String str3 = new String("Hello");
        String str4 = new String("Hello");
        System.out.println("str3 and str4 are created by using new operator.");
        System.out.println(" str3 == str4 is " + (str3 == str4)); // false
        System.out.println(" str3.equals(str4) is " + str3.equals(str4)); // true

        String str5 = "Hel" + "lo";
        String str6 = "He" + "llo";
        System.out.println("str5 and str6 are created by using string constant expression.");
        System.out.println(" str5 == str6 is " + (str5 == str6)); // true
        System.out.println(" str5.equals(str6) is " + str5.equals(str6)); // true

        String s = "lo";
        String str7 = "Hel" + s;
        String str8 = "He" + "llo";
        System.out.println("str7 is computed at runtime.");
        System.out.println("str8 is created by using string constant expression.");
        System.out.println(" str7 == str8 is " + (str7 == str8)); // false
        System.out.println(" str7.equals(str8) is " + str7.equals(str8)); // true
    }
}
String Literal Pool:
String allocation, like all object allocation, proves costly in both time and memory. The JVM performs some trickery while instantiating string literals to increase performance and decrease memory overhead. To cut down on the number of String objects created in the JVM, the String class keeps a pool of strings. Each time your code creates a string literal, the JVM checks the string literal pool first. If the string already exists in the pool, a reference to the pooled instance is returned. If the string does not exist in the pool, a new String object is instantiated and then placed in the pool. Java can make this optimization since strings are immutable and can be shared without fear of data corruption. For example:
public class Program {
    public static void main(String[] args) {
        String str1 = "Hello";
        String str2 = "Hello";
        System.out.print(str1 == str2); // true
    }
}
Unfortunately, when you use
String a = new String("Hello");
a String object is created outside the string literal pool, even if an equal string already exists in the pool. Considering all that, avoid new String unless you specifically know that you need it! For example:
public class Program {
    public static void main(String[] args) {
        String str1 = "Hello";
        String str2 = new String("Hello");
        // note the parentheses: == binds more loosely than +,
        // so (str1 == str2) must be grouped before concatenation
        System.out.print((str1 == str2) + " "); // false
        System.out.print(str1.equals(str2));    // true
    }
}
A JVM has a string pool where it keeps at most one object of each String value. String literals always refer to objects in the string pool. String objects created with the new operator do not refer to objects in the string pool, but can be made to do so using String's intern() method. java.lang.String.intern() returns an interned String, that is, one that has an entry in the global String pool; if the String is not already in the global String pool, it is added. For example:
public class Program {
    public static void main(String[] args) {
        // Create three strings in three different ways.
        String s1 = "Hello";
        String s2 = new StringBuffer("He").append("llo").toString();
        String s3 = s2.intern();

        // Determine which strings are equivalent using the == operator
        System.out.println("s1 == s2 ? " + (s1 == s2)); // s1 == s2 ? false
        System.out.println("s1 == s3 ? " + (s1 == s3)); // s1 == s3 ? true
    }
}
A table always maintains a single reference to each unique String object ever placed in the global string literal pool by an instance of the runtime, in order to optimize space. Because that table always holds a reference to every String object in the string literal pool, the string objects in the pool are not eligible for garbage collection.

String Literals in the Java Language Specification, Third Edition
Each string literal is a reference to an instance of class String. String objects have a constant value. String literals, or, more generally, strings that are the values of constant expressions, are "interned" so as to share unique instances, using the method String.intern.
Thus, the test program consisting of the compilation unit:
class Other {
    static String hello = "Hello";
}

class Test {
    public static void main(String[] args) {
        String hello = "Hello", lo = "lo";
        System.out.println((hello == "Hello") + " ");        // true
        System.out.println((Other.hello == hello) + " ");    // true
        // the JLS example adds a second Other class in a different package
        // here (other.Other.hello == hello), which also prints true
        System.out.println((hello == ("Hel" + "lo")) + " "); // true
        System.out.println((hello == ("Hel" + lo)) + " ");   // false
        System.out.println(hello == ("Hel" + lo).intern());  // true
    }
}
This example illustrates six points:
Literal strings within the same class in the same package represent references to the same String object.
Literal strings within different classes in the same package represent references to the same String object.
Literal strings within different classes in different packages likewise represent references to the same String object.
Strings computed by constant expressions are computed at compile time and then treated as if they were literals.
Strings computed by concatenation at run time are newly created and therefore distinct.
The result of explicitly interning a computed string is the same string as any pre-existing literal string with the same contents.
Refer : http://www.xyzws.com/javafaq/what-is-string-literal-pool/3