Borland Visibroker
==================
LM
- Borland Visibroker for C++
TechRx
- Borland Visibroker
Training
========
** http://java.sun.com/developer/onlineTraining/corba/corba.html
Important Points:
1. Interfaces
2. ORB functions
3. CORBA services
4. Object References and Requests
5. Exceptions
6. Providing an Implementation
7. Server Side
8. Object Adapters
* CORBA -- Common Object Request Broker Architecture
* It allows a distributed, heterogeneous collection of objects to interoperate.
The basic CORBA paradigm is that of a request for services of a distributed
object. Everything else defined by the OMG is in terms of this basic paradigm.
The services that an object provides are given by its interface. Interfaces are
defined in OMG's Interface Definition Language (IDL). Distributed objects are
identified by object references, which are typed by IDL interfaces.
A client holds an object reference to a distributed object. The object
reference is typed by an interface. The Object Request Broker, or ORB, delivers
the request to the object and returns any results to the client.
The ORB
=======
The ORB is the distributed service that implements the request to the remote
object. It locates the remote object on the network, communicates the request
to the object, waits for the results, and, when they are available, communicates
those results back to the client.
The ORB implements location transparency. Exactly the same request mechanism is
used by the client and the CORBA object regardless of where the object is
located. It might be in the same process with the client, down the hall or
across the planet. The client cannot tell the difference.
The ORB implements programming language independence for the request. The
client issuing the request can be written in a different programming language
from the implementation of the CORBA object. The ORB does the necessary
translation between programming languages. Language bindings are defined for
all popular programming languages.
CORBA
=====
One of the goals of the CORBA specification is that clients and object
implementations are portable. The CORBA specification defines an application
programmer's interface (API) for clients of a distributed object as well as an
API for the implementation of a CORBA object. This means that code written for
one vendor's CORBA product could, with a minimum of effort, be rewritten to
work with a different vendor's product. However, the reality of CORBA products
on the market today is that CORBA clients are portable but object
implementations need some rework to port from one CORBA product to another.
CORBA 2.0 added interoperability as a goal in the specification. In particular,
CORBA 2.0 defines a network protocol, called IIOP (Internet Inter-ORB
Protocol), that allows clients using a CORBA product from any vendor to
communicate with objects using a CORBA product from any other vendor. IIOP
works across the Internet, or more precisely, across any TCP/IP implementation.
Interoperability is more important in a distributed system than portability.
IIOP is used in other systems that do not even attempt to provide the CORBA
API. In particular, IIOP is used as the transport protocol for a version of
Java RMI (so called "RMI over IIOP"). Since EJB is defined in terms of RMI, it
too can use IIOP. Various application servers available on the market use IIOP
but do not expose the entire CORBA API. Because they all use IIOP, programs
written to these different APIs can interoperate with each other and with
programs written to the CORBA API.
CORBA Services
==============
Another important part of the CORBA standard is the definition of a set of
distributed services to support the integration and interoperation of
distributed objects. As depicted in the graphic below, the services, known as
CORBA Services or COS, are defined on top of the ORB. That is, they are defined
as standard CORBA objects with IDL interfaces, sometimes referred to as "Object
Services."
1. Object life cycle: Defines how CORBA objects are created, removed, moved, and copied
2. Naming: Defines how CORBA objects can have friendly symbolic names
3. Events: Decouples the communication between distributed objects
4. Relationships: Provides arbitrary typed n-ary relationships between CORBA objects
5. Externalization: Coordinates the transformation of CORBA objects to and from external media
6. Transactions: Coordinates atomic access to CORBA objects
7. Concurrency Control: Provides a locking service for CORBA objects in order to ensure serializable access
8. Property: Supports the association of name-value pairs with CORBA objects
9. Trader: Supports the finding of CORBA objects based on properties describing the service offered by the object
10. Query: Supports queries on objects
IDL
===
* Module
* Interface
* Exceptions
* Methods: method parameters are marked in, out, or inout
IDL declarations are compiled with an IDL compiler and converted to their
associated representations in the target programming languages according to the
standard language binding.
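For concreteness, here is a hedged reconstruction of the IDL behind the Stock example used throughout these notes. The operation signatures are inferred from the code fragments later in the document; the fields of Quote are illustrative assumptions, not taken from the original IDL file.

```idl
// Hedged reconstruction of the StockObjects IDL (fields of Quote assumed).
module StockObjects {
    struct Quote {
        string symbol;
        double price;
    };

    exception Unknown {};

    interface Stock {
        // Returns the current quote; raises Unknown if none is available.
        Quote get_quote() raises(Unknown);
        void set_quote(in Quote quote);
        readonly attribute string description;
    };

    interface StockFactory {
        Stock create(in string symbol, in string description);
    };
};
```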
Object References and Requests
==============================
Clients issue a request on a CORBA object using an object reference. An object
reference identifies the distributed object that will receive the request.
Here's a Java programming language code fragment that obtains a Stock object
reference and then uses it to obtain the current price of the stock. Note
that the code fragment does not directly use CORBA types; instead it uses the
Java types that have been produced by the IDL to Java compiler.
// Client Code
Stock theStock = ...   // object reference obtained earlier
try {
    Quote current_quote = theStock.get_quote();
} catch (Throwable e) {
    // handle Unknown and CORBA system exceptions here
}
Object references can be passed around the distributed object system, i.e. as
parameters to operations and returned as results of requests. For example,
notice that the StockFactory interface defines a create() operation that
returns an instance of a Stock. Here's a Java client code fragment that issues
a request on the factory object and receives the resulting stock object
reference.
StockFactory factory = ...   // factory reference obtained earlier
Stock theStock = ...
try {
    theStock = factory.create(
        "GII",
        "Global Industries Inc.");
} catch (Throwable e) {
    // handle creation failures here
}
Note that issuing a request on a CORBA object is not all that different from
issuing a request on a Java object in a local program. The main difference is
that the CORBA objects can be anywhere. The CORBA system provides location
transparency, which implies that the client cannot tell if the request is to an
object in the same process, on the same machine, down the hall, or across the
planet.
Another difference from a local Java object is that the lifetime of the CORBA
object is tied neither to the process in which the client executes, nor to the
process in which the CORBA object executes. Object references persist; they can
be saved as a string and recreated from a string.
The following Java code converts the Stock object reference to a string:
String stockString =
orb.object_to_string(theStock);
The string can be stored or communicated outside of the distributed object
system. Any client can convert the string back to an object reference and issue
a request on the distributed object.
This Java code converts the string back to a Stock object reference:
org.omg.CORBA.Object obj =
orb.string_to_object(stockString);
Stock theStock = StockHelper.narrow(obj);
Note that the resulting type of the string_to_object() method is Object, not
Stock. The second line narrows the type of the object reference from Object to
Stock. IDL supports a hierarchy of interfaces; the narrow() method call is an
operation on the hierarchy.
IDL interfaces can build on top of each other.
All CORBA interfaces implicitly inherit the Object interface. They all support
the operations defined for Object. Inheritance of Object is implicit; there is
no need to declare it.
IDL Type Operations
====================
Given that IDL interfaces can be arranged in a hierarchy, a small number of
operations are defined on that hierarchy. The narrow() operation casts an
object reference to a more specific type:
org.omg.CORBA.Object obj = ...
Stock theStock = StockHelper.narrow(obj);
The is_a() operation determines if an object reference supports a particular
interface:
if (obj._is_a(StockHelper.id())) ...
The id() operation defined on the helper class returns a repository id for the
interface. The repository id is a string representing the interface. For the
stock example, the repository id is:
IDL:StockObjects/Stock:1.0
Finally, it is possible to widen an object reference, that is, cast it to a less
specific interface:
Stock theStock = theReportingStock;
There are no special operations to widen an object reference. It is
accomplished exactly as in the Java programming language.
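The narrow()/widen pair behaves much like a checked downcast and an implicit upcast on local Java objects. The following is only a local-Java analogy, not the CORBA API; note that the real StockHelper.narrow() raises a CORBA system exception on a type mismatch, whereas this sketch returns null:

```java
// Local-Java analogy for widening and narrowing object references.
class TypeOps {
    // "Widening" is an ordinary implicit upcast; no special operation needed.
    static Object widen(String s) {
        return s;   // String -> Object
    }

    // "Narrowing" checks the runtime type before casting, much as
    // StockHelper.narrow() checks the interface via is_a().
    static String narrow(Object obj) {
        if (obj instanceof String) {
            return (String) obj;
        }
        return null;   // analogy only: the real narrow() raises an exception
    }
}
```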
The IDL compiler for the Java programming language generates client-side stubs,
which represent the CORBA object locally in the Java programming language. The
generated code also represents in the Java programming language all of the IDL
interfaces and data types used to issue requests. The client code thus depends
on the generated Java code.
IDL         Java                 C++
---------   ------------------   -------------------
module      package              namespace
interface   interface            abstract class
operation   method               member function
attribute   pair of methods      pair of functions
exception   exception            exception
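As a rough illustration of the Java column of this mapping, here is a hedged sketch of what an IDL compiler might emit for the Stock interface. Real generated code also extends org.omg.CORBA.Object and includes helper and holder classes, all omitted here; LocalStock is a hypothetical implementation added only to make the sketch checkable.

```java
// Hedged sketch of the IDL-to-Java mapping for the Stock example.
class Unknown extends Exception {}        // IDL exception -> Java exception

class Quote {                             // IDL struct -> Java class
    public String symbol;
    public double price;
}

interface Stock {                         // IDL interface -> Java interface
    Quote get_quote() throws Unknown;     // IDL operation -> Java method
    String description();                 // readonly attribute -> one accessor
    // A writable attribute would map to a pair of methods:
    //   Quote quote();  void quote(Quote q);
}

// Trivial local implementation, just to exercise the sketch.
class LocalStock implements Stock {
    public Quote get_quote() throws Unknown { throw new Unknown(); }
    public String description() { return "Global Industries Inc."; }
}
```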
CORBA products provide an IDL compiler that converts IDL into the Java
programming language. The IDL compiler available for the Java 2 SDK is called
idltojava. The IDL compiler that comes with VisiBroker for Java is called
idl2java.
Exceptions
----------
There are two types of CORBA exceptions: System Exceptions and User Exceptions.
System Exceptions are thrown when something goes wrong with the system--for
instance, if you request a method that doesn't exist on the server, if there's
a communication problem, or if the ORB hasn't been initialized correctly. The
Java class SystemException extends RuntimeException, so the compiler won't
complain if you forget to catch them. You need to explicitly wrap your CORBA
calls in try...catch blocks in order to recover gracefully from System
Exceptions.
CORBA System Exceptions can contain "minor codes" which may provide additional
information about what went wrong. Unfortunately, these are vendor-specific, so
you need to tailor your error recovery routines to the ORB you're using.
User Exceptions are generated if something goes wrong inside the execution of
the remote method itself. These are declared inside the IDL definition for the
object, and are automatically generated by the idltojava compiler. In the stock
example, Unknown is a user exception.
Since User Exceptions are subclasses of java.lang.Exception, the compiler will
complain if you forget to trap them (and this is as it should be).
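The compile-time difference between the two exception kinds can be demonstrated with plain Java. The class names below are hypothetical stand-ins, not CORBA types:

```java
// Unchecked: analogous to CORBA system exceptions (extends RuntimeException).
class SystemLikeException extends RuntimeException {
    SystemLikeException(String msg) { super(msg); }
}

// Checked: analogous to CORBA user exceptions (extends Exception).
class UserLikeException extends Exception {
    UserLikeException(String msg) { super(msg); }
}

class ExceptionDemo {
    // Compiles without 'throws': the compiler ignores unchecked exceptions,
    // so callers are free to forget the try...catch block.
    static void mayFailUnchecked() {
        throw new SystemLikeException("COMM_FAILURE-like error");
    }

    // Must declare 'throws': the compiler forces callers to handle it.
    static void mayFailChecked() throws UserLikeException {
        throw new UserLikeException("Unknown-like error");
    }

    static String call() {
        try {
            mayFailChecked();
            return "ok";
        } catch (UserLikeException e) {   // required by the compiler
            return "caught: " + e.getMessage();
        }
    }
}
```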
Providing an Implementation
===========================
Recall that given an IDL file, the IDL compiler generates various files for a
CORBA client. In addition to the files generated for a client, it also
generates a skeleton class for the object implementation. A skeleton is the
entry point into the distributed object. It unmarshals the incoming data, calls
the method implementing the operation being requested, and returns the
marshaled results. The object developer need only compile the skeleton and not
be concerned with the insides of it. The object developer can focus on
providing the implementation of the IDL interface.
To implement a CORBA object in the Java programming language, the developer
simply implements a Java class that extends the generated skeleton class and
provides a method for each operation in the interface. In the example, the IDL
compiler generates the skeleton class _StockImplBase for the Stock interface. A
possible implementation of the Stock interface is:
public class StockImpl extends StockObjects._StockImplBase {
    private Quote _quote = null;
    private String _description = null;

    public StockImpl(String name, String description) {
        super();   // the name parameter could be used as an object name
        _description = description;
    }

    public Quote get_quote() throws Unknown {
        if (_quote == null) throw new Unknown();
        return _quote;
    }

    public void set_quote(Quote quote) {
        _quote = quote;
    }

    public String description() {
        return _description;
    }
}
Implementation Type Checking
=============================
Just as type checking is done at the client for the request to a distributed
object, type checking is also done for the object implementation.
The IDL compiler for the Java programming language generates object skeletons
and Java code to represent all of the IDL interfaces and data types used in the
interface definition. The implementation code thus depends on the generated
Java code.
Server Side
============
A server that will run with the Java 2 ORB needs to do the following:
* Define a main method
* Initialize the ORB
* Instantiate at least one object
* Connect each object to the ORB
* Wait for requests
The server must instantiate at least one object since objects are the only way
to offer services in CORBA systems.
Here's an implementation of the stock objects server. This code depends on the Java 2 ORB.
import java.io.*;   // for PrintWriter, BufferedWriter, FileWriter

public class theServer {
    public static void main(String[] args) {
        try {
            // Initialize the ORB.
            org.omg.CORBA.ORB orb = org.omg.CORBA.ORB.init(args, null);

            // Create a stock object.
            StockImpl theStock =
                new StockImpl("GII", "Global Industries Inc.");

            // Let the ORB know about the object.
            orb.connect(theStock);

            // Write the stringified object reference to the file named in args[0].
            PrintWriter out = new PrintWriter(new BufferedWriter(
                new FileWriter(args[0])));
            out.println(orb.object_to_string(theStock));
            out.close();

            // Wait indefinitely for invocations from clients.
            java.lang.Object sync = new java.lang.Object();
            synchronized (sync) {
                sync.wait();
            }
        } catch (Exception e) {
            System.err.println("Stock server error: " + e);
            e.printStackTrace(System.out);
        }
    }
}
Notice that the server instantiates the StockImpl class implementing the Stock
interface and then passes the instance to the ORB using the connect() call, indicating
that the object is ready to accept requests. Finally, the server waits for
requests.
Object Adapters
===============
The CORBA specification defines the concept of an object adapter. An object
adapter is a framework for implementing CORBA objects. It provides an API that
object implementations use for various low-level services. According to the
CORBA specification, an object adapter is responsible for the following
functions:
* Generation and interpretation of object references
* Method invocation
* Security of interactions
* Object and implementation activation and deactivation
* Mapping object references to the corresponding object implementations
* Registration of implementations
The architecture supports the definition of many kinds of object adapters. The
specification includes the definition of the basic object adapter (BOA). In the
previous section, you saw server code that registers an object with the ORB;
with VisiBroker, that registration is provided by its implementation of the
BOA. The BOA has been implemented in various CORBA products. Unfortunately,
since the specification of the BOA was not complete, the various BOA
implementations differ in some significant ways. This has compromised server
portability.
To address this shortcoming, an entirely new object adapter was added: the
portable object adapter (POA). Unfortunately, the POA is not yet supported in
many products. Both the BOA and the POA are described here.
Activation on Demand by the Basic Object Adapter (BOA)
======================================================
One of the main tasks of the BOA is to support on-demand object activation.
When a client issues a request, the BOA determines if the object is currently
running and if so, it delivers the request to the object. If the object is not
running, the BOA activates the object and then delivers the request.
The BOA defines four different models for object activation:
1. Shared server: Multiple active objects share the same server. The server
services requests from multiple clients. The server remains active until it is
deactivated or exits.
2. Unshared server: Only one object is active in the server. The server exits
when the client that caused its activation exits.
3. Server-per-method: Each request results in the creation of a server. The
server exits when the method completes.
4. Persistent server: The server is started by an entity other than the BOA
(you, operating services, etc.). Multiple active objects share the server.
Portable Object Adapter (POA)
=============================
According to the specification, "The intent of the POA, as its name suggests,
is to provide an object adapter that can be used with multiple ORB
implementations with a minimum of rewriting needed to deal with different
vendors' implementations." However, most CORBA products do not yet support the
POA.
The POA is also intended to allow persistent objects -- at least, from the
client's perspective. That is, as far as the client is concerned, these objects
are always alive, and maintain data values stored in them, even though
physically, the server may have been restarted many times, or the
implementation may be provided by many different object implementations.
The POA allows the object implementor a lot more control. Previously, the
implementation of the object was responsible only for the code that is executed
in response to method requests. Now, additionally, the implementor has more
control over the object's identity, state, storage, and lifecycle.
The POA has support for many other features, including the following:
* Transparent object activation
* Multiple simultaneous object identities
* Transient objects
* Object ID namespaces
* Policies including multithreading, security, and object management
* Multiple distinct POAs in a single server with different policies and namespaces
A word on multithreading. Each POA has a threading policy that determines how
that particular POA instance will deal with multiple simultaneous requests. In
the single thread model, all requests are processed one at a time. The
underlying object implementations can therefore be lazy and thread-unsafe. Of
course, this can lead to performance problems. In the alternate ORB-controlled
model, the ORB is responsible for creating and allocating threads and for
dispatching requests to the object implementations efficiently. The programmer doesn't
need to worry about thread management issues; however, the programmer
definitely has to make sure the objects are all thread-safe.
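That requirement can be sketched in plain Java. SafeCounter is a hypothetical stand-in for shared servant state (such as _quote in StockImpl), and the threads simulate concurrent ORB dispatch under the ORB-controlled model:

```java
// Why servants must be thread-safe under ORB-controlled threading.
class SafeCounter {
    private int hits = 0;

    // synchronized: concurrent dispatch threads cannot interleave the update.
    synchronized void record() { hits++; }
    synchronized int hits() { return hits; }
}

class DispatchDemo {
    // Simulates four ORB dispatch threads invoking the same servant.
    static int simulate() {
        final SafeCounter servantState = new SafeCounter();
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(new Runnable() {
                public void run() {
                    for (int j = 0; j < 1000; j++) servantState.record();
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            try { t.join(); } catch (InterruptedException e) { /* ignore */ }
        }
        return servantState.hits();   // no updates lost with synchronization
    }
}
```

Without the synchronized keyword, the interleaved increments could lose updates; under the single-thread POA policy the locking would be unnecessary.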
Tuesday, October 12, 2004
Thursday, October 07, 2004
==================
LM
- Borland Visibroker for C++
TechRx
- Borland Visibroker
Training
========
** http://java.sun.com/developer/onlineTraining/corba/corba.html
Imp Points :
1. Interfaces
2. Orb functions
3. Corba services
4. Object References and Requests
5. Exceptions
6.
7. Server Side
8. Object Adapters
* CORBA -- Common Object Request Broker Architecture
* It allows a distributed, heterogeneous collection of objects to interoperate.
The basic CORBA paradigm is that of a request for services of a distributed
object. Everything else defined by the OMG is in terms of this basic paradigm.
The services that an object provides are given by its interface. Interfaces are
defined in OMG's Interface Definition Language (IDL). Distributed objects are
identified by object references, which are typed by IDL interfaces.
A client holds an object reference to a distributed object. The object
reference is typed by an interface. The Object Request Broker, or ORB, delivers
the request to the object and returns any results to the client.
The ORB
The ORB is the distributed service that implements the request to the remote
object. It locates the remote object on the network, communicates the request
to the object, waits for the results and when available communicates those
results back to the client.
The ORB implements location transparency. Exactly the same request mechanism is
used by the client and the CORBA object regardless of where the object is
located. It might be in the same process with the client, down the hall or
across the planet. The client cannot tell the difference.
The ORB implements programming language independence for the request. The
client issuing the request can be written in a different programming language
from the implementation of the CORBA object. The ORB does the necessary
translation between programming languages. Language bindings are defined for
all popular programming languages.
CORBA
=====
One of the goals of the CORBA specification is that clients and object
implementations are portable. The CORBA specification defines an application
programmer's interface (API) for clients of a distributed object as well as an
API for the implementation of a CORBA object. This means that code written for
one vendor's CORBA product could, with a minimum of effort, be rewritten to
work with a different vendor's product. However, the reality of CORBA products
on the market today is that CORBA clients are portable but object
implementations need some rework to port from one CORBA product to another.
CORBA 2.0 added interoperability as a goal in the specification. In particular,
CORBA 2.0 defines a network protocol, called IIOP (Internet Inter-ORB
Protocol), that allows clients using a CORBA product from any vendor to
communicate with objects using a CORBA product from any other vendor. IIOP
works across the Internet, or more precisely, across any TCP/IP implementation.
Interoperability is more important in a distributed system than portability.
IIOP is used in other systems that do not even attempt to provide the CORBA
API. In particular, IIOP is used as the transport protocol for a version of
Java RMI (so called "RMI over IIOP"). Since EJB is defined in terms of RMI, it
too can use IIOP. Various application servers available on the market use IIOP
but do not expose the entire CORBA API. Because they all use IIOP, programs
written to these different API's can interoperate with each other and with
programs written to the CORBA API.
CORBA Services
==============
Another important part of the CORBA standard is the definition of a set of
distributed services to support the integration and interoperation of
distributed objects. As depicted in the graphic below, the services, known as
CORBA Services or COS, are defined on top of the ORB. That is, they are defined
as standard CORBA objects with IDL interfaces, sometimes referred to as "Object
Services."
1. Object life cycle: Defines how CORBA objects are created, removed, moved, and copied
2. Naming: Defines how CORBA objects can have friendly symbolic names
3. Events: Decouples the communication between distributed objects
4. Relationships: Provides arbitrary typed n-ary relationships between CORBA objects
5. Externalization: Coordinates the transformation of CORBA objects to and from external media
6. Transactions: Coordinates atomic access to CORBA objects
7. Concurrency Control: Provides a locking service for CORBA objects in order to ensure serializable access
8. Property: Supports the association of name-value pairs with CORBA objects
9. Trader: Supports the finding of CORBA objects based on properties describing the service offered by the object
10. Query: Supports queries on objects
IDL
===
* Module
* Interface
* Exceptions
* Methods: Method params are marked in, out or inout
IDL declarations are compiled with an IDL compiler and converted to their
associated representations in the target programming languages according to the
standard language binding.
Object References and Requests
==============================
Clients issue a request on a CORBA object using an object reference. An object
reference identifies the distributed object that will receive the request.
Here's a Java programming language code fragment that obtains a Stock object
reference and then it uses it to obtain the current price of the stock. Note
that the code fragment does not directly use CORBA types; instead it uses the
Java types that have been produced by the IDL to Java compiler.
// Client Code
Stock theStock = ...
try {
Quote current_quote =
theStock.get_quote();
} catch (Throwable e) {
}
Object references can be passed around the distributed object system, i.e. as
parameters to operations and returned as results of requests. For example,
notice that the StockFactory interface defines a create() operation that
returns an instance of a Stock. Here's a Java client code fragment that issues
a request on the factory object and receives the resulting stock object
reference.
StockFactory factory = ...
Stock theStock = ...
try {
theStock = factory.create(
"GII",
"Global Industries Inc.");
} catch (Throwable e) {
}
Note that issuing a request on a CORBA object is not all that different from
issuing a request on a Java object in a local program. The main difference is
that the CORBA objects can be anywhere. The CORBA system provides location
transparency, which implies that the client cannot tell if the request is to an
object in the same process, on the same machine, down the hall, or across the
planet.
Another difference from a local Java object is that the life time of the CORBA
object is not tied to the process in which the client executes, nor to the
process in which the CORBA object executes. Object references persist; they can
be saved as a string and recreated from a string.
The following Java code converts the Stock object reference to a string:
String stockString =
orb.object_to_string(theStock);
The string can be stored or communicated outside of the distributed object
system. Any client can convert the string back to an object reference and issue
a request on the distributed object.
This Java code converts the string back to a Stock object reference:
org.omg.CORBA.Object obj =
orb.string_to_object(stockString);
Stock theStock = StockHelper.narrow(obj);
Note that the resulting type of the string_to_object() method is Object, not
Stock. The second line narrows the type of the object reference from Object to
Stock. IDL supports a hierarchy of interfaces; the narrow() method call is an
operation on the hierarchy.
IDL Interfaces can build on top of each other.
All CORBA interfaces implicitly inherit the Object interface. They all support
the operations defined for Object. Inheritance of Object is implicit; there is
no need to declare it.
IDL Type Operations
====================
Given that IDL interfaces can be arranged in a hierarchy, a small number of
operations are defined on that hierarchy. The narrow() operation casts an
object reference to a more specific type:
org.omg.CORBA.Object obj = ...
Stock theStock = StockHelper.narrow(obj);
The is_a() operation, determines if an object reference supports a particular
interface:
if (obj._is_a(StockHelper.id()) ...
The id() operation defined on the helper class returns a repository id for the
interface. The repository id is a string representing the interface. For the
stock example, the repository id is:
IDL:StockObjects/Stock:1.0
Finally, it is possible to widen an object reference, that is cast it to a less
specific interface:
Stock theStock = theReportingStock;
There are no special operations to widen an object reference. It is
accomplished exactly as in the Java programming language.
The IDL compiler for Java programming language generates client-side stubs,
which represent the CORBA object locally in the Java programming language. The
generated code also represents in the Java programming language all of the IDL
interfaces and data types used to issue requests. The client code thus depends
on the generated Java code.
IDL Java C++
---- ----- -----
module package namespace
interface interface abstract class
operation method member function
attribute pair of methods pair of functions
exception exception exception
CORBA products provide an IDL compiler that converts IDL into the Java
programming language. The IDL compiler available for the Java 2 SDK is called
idltojava. The IDL compiler that comes with VisiBroker for Java is called
idl2java.
Exceptions
----------
There are two types of CORBA exceptions, System Exceptions and User Exceptions.
System Exceptions are thrown when something goes wrong with the system--for
instance, if you request a method that doesn't exist on the server, if there's
a communication problem, or if the ORB hasn't been initialized correctly. The
Java class SystemException extends RuntimeException, so the compiler won't
complain if you forget to catch them. You need to explicitly wrap your CORBA
calls in try...catch blocks in order to recover gracefully from System
Exceptions.
CORBA System Exceptions can contain "minor codes" which may provide additional
information about what went wrong. Unfortunately, these are vendor-specific, so
you need to tailor your error recovery routines to the ORB you're using.
User Exceptions are generated if something goes wrong inside the execution of
the remote method itself. These are declared inside the IDL definition for the
object, and are automatically generated by the idltojava compiler. In the stock
example, Unknown is a user exception.
Since User Exceptions are subclasses of java.lang.Exception, the compiler will
complain if you forget to trap them (and this is as it should be).
Providing an Implementation
===========================
Recall that given an IDL file, the IDL compiler generates various files for a
CORBA client. In addition to the files generated for a client, it also
generates a skeleton class for the object implementation. A skeleton is the
entry point into the distributed object. It unmarshals the incoming data, calls
the method implementing the operation being requested, and returns the
marshaled results. The object developer need only compile the skeleton and not
be concerned with the insides of it. The object developer can focus on
providing the implementation of the IDL interface.
To implement a CORBA object in the Java programming language, the developer
simply implements a Java class that extends the generated skeleton class and
provides a method for each operation in the interface. In the example, the IDL
compiler generates the skeleton class _StockImplBase for the Stock interface. A
possible implementation of the Stock interface is:
public class StockImpl extends
StockObjects._StockImplBase {
private Quote _quote=null;
private String _description=null;
public StockImpl(
String name, String description) {
super();
_description = description;
}
public Quote get_quote() throws Unknown {
if (_quote==null) throw new Unknown();
return _quote;
}
public void set_quote(Quote quote) {
_quote = quote;
}
public String description() {
return _description;
}
}
Implementation Type Checking
=============================
Just as type checking is done at the client for the request to a distributed
object, type checking is also done for the object implementation.
The IDL compiler for the Java programming language generates object skeletons
and Java code to represent all of the IDL interfaces and data types used in the
interface definition. The implementation code thus depends on the generated
Java code.
Server Side
============
A server that will run with the Java 2 ORB needs to do the following:
* Define a main method
* Initialize the ORB
* Instantiate at least one object
* Connect each object to the orb
* Wait for requests
The server must instantiate at least one object since objects are the only way
to offer services in CORBA systems.
Here's an implementation of the stock objects server. This code depends on the Java 2 ORB.
import java.io.*;

public class theServer {
    public static void main(String[] args) {
        try {
            // Initialize the ORB.
            org.omg.CORBA.ORB orb = org.omg.CORBA.ORB.init(args, null);

            // Create a stock object.
            StockImpl theStock =
                new StockImpl("GII", "Global Industries Inc.");

            // Let the ORB know about the object.
            orb.connect(theStock);

            // Write the stringified object reference to a file.
            PrintWriter out = new PrintWriter(new BufferedWriter(
                new FileWriter(args[0])));
            out.println(orb.object_to_string(theStock));
            out.close();

            // Wait for invocations from clients.
            java.lang.Object sync = new java.lang.Object();
            synchronized (sync) {
                sync.wait();
            }
        } catch (Exception e) {
            System.err.println("Stock server error: " + e);
            e.printStackTrace(System.out);
        }
    }
}
Notice that the server instantiates the StockImpl class (which implements the
Stock interface) and then passes the instance to the ORB via the connect()
call, indicating that the object is ready to accept requests. Finally, the
server waits for requests.
Object Adapters
===============
The CORBA specification defines the concept of an object adapter. An object
adapter is a framework for implementing CORBA objects. It provides an API that
object implementations use for various low level services. According to the
CORBA specification, an object adapter is responsible for the following
functions:
* Generation and interpretation of object references
* Method invocation
* Security of interactions
* Object and implementation activation and deactivation
* Mapping object references to the corresponding object implementations
* Registration of implementations
The architecture supports the definition of many kinds of object adapters. The
specification includes the definition of the basic object adapter (BOA). In the
previous section, you saw some server code that uses the services of
VisiBroker's implementation of the BOA. The BOA has been implemented in various
CORBA products. Unfortunately, since the specification of the BOA was not
complete, the various BOA implementations differ in some significant ways. This
has compromised server portability.
To address this shortcoming, an entirely new object adapter was added, the
portable object adapter (POA). Unfortunately, the POA is not yet supported in
many products. In any event, the BOA and the POA are described here.
Activation on Demand by the Basic Object Adapter (BOA)
One of the main tasks of the BOA is to support on-demand object activation.
When a client issues a request, the BOA determines if the object is currently
running and if so, it delivers the request to the object. If the object is not
running, the BOA activates the object and then delivers the request.
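The on-demand activation idea can be pictured with a small plain-Java sketch. This is only an illustration of the pattern, not the real BOA API; the OnDemandRegistry class and its method names are invented for the sketch:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Illustration only: a registry that activates an object the first time
// a request arrives for it, the way the BOA activates servants on demand.
public class OnDemandRegistry {
    private final Map<String, Supplier<Object>> factories = new HashMap<>();
    private final Map<String, Object> active = new HashMap<>();

    // Register how to create the object, without creating it yet.
    public void register(String id, Supplier<Object> factory) {
        factories.put(id, factory);
    }

    // Deliver a "request": activate the object first if necessary.
    public Object resolve(String id) {
        return active.computeIfAbsent(id, key -> factories.get(key).get());
    }

    public boolean isActive(String id) {
        return active.containsKey(id);
    }

    public static void main(String[] args) {
        OnDemandRegistry reg = new OnDemandRegistry();
        reg.register("GII", () -> "Global Industries Inc. servant");
        System.out.println(reg.isActive("GII")); // false: not activated yet
        reg.resolve("GII");                      // first request activates it
        System.out.println(reg.isActive("GII")); // true
    }
}
```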
The BOA defines four different models for object activation:
1. Shared server: Multiple active objects share the same server. The server
services requests from multiple clients. The server remains active until it is
deactivated or exits.
2. Unshared server: Only one object is active in the server. The server exits
when the client that caused its activation exits.
3. Server-per-method: Each request results in the creation of a server. The
server exits when the method completes.
4. Persistent server: The server is started by an entity other than the BOA
(you, operating services, etc.). Multiple active objects share the server.
Portable Object Adapter (POA)
=============================
According to the specification, "The intent of the POA, as its name suggests,
is to provide an object adapter that can be used with multiple ORB
implementations with a minimum of rewriting needed to deal with different
vendors' implementations." However, most CORBA products do not yet support the
POA.
The POA is also intended to allow persistent objects -- at least, from the
client's perspective. That is, as far as the client is concerned, these objects
are always alive, and maintain data values stored in them, even though
physically, the server may have been restarted many times, or the
implementation may be provided by many different object implementations.
The POA allows the object implementor a lot more control. Previously, the
implementation of the object was responsible only for the code that is executed
in response to method requests. Now, additionally, the implementor has more
control over the object's identity, state, storage, and lifecycle.
The POA has support for many other features, including the following:
* Transparent object activation
* Multiple simultaneous object identities
* Transient objects
* Object ID namespaces
* Policies including multithreading, security, and object management
* Multiple distinct POAs in a single server with different policies and namespaces
A word on multithreading. Each POA has a threading policy that determines how
that particular POA instance will deal with multiple simultaneous requests. In
the single thread model, all requests are processed one at a time. The
underlying object implementations can therefore be lazy and thread-unsafe. Of
course, this can lead to performance problems. In the alternate ORB-controlled
model, the ORB is responsible for creating and allocating threads and sending
requests in to the object implementations efficiently. The programmer doesn't
need to worry about thread management issues; however, the programmer
definitely has to make sure the objects are all thread-safe.
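The two threading policies can be contrasted with a plain-Java sketch (again, an illustration rather than the actual POA API): a single-thread executor stands in for the single thread model, where requests are processed one at a time, while a thread pool stands in for the ORB-controlled model, where the servant state must be thread-safe (hence the AtomicInteger):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ThreadingPolicies {
    // Process n requests through the given executor and return how many
    // were handled. With a single-thread executor the servant could be
    // thread-unsafe; with a pool it must be thread-safe.
    static int serve(ExecutorService workers, int n) throws InterruptedException {
        AtomicInteger handled = new AtomicInteger();
        for (int i = 0; i < n; i++) {
            workers.submit(handled::incrementAndGet);
        }
        workers.shutdown();
        workers.awaitTermination(5, TimeUnit.SECONDS);
        return handled.get();
    }

    public static void main(String[] args) throws InterruptedException {
        // "Single thread model": serialized, safe but potentially slow.
        System.out.println(serve(Executors.newSingleThreadExecutor(), 100));
        // "ORB-controlled model": concurrent, needs thread-safe servants.
        System.out.println(serve(Executors.newFixedThreadPool(4), 100));
    }
}
```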
Building Tomcat from Eclipse
The problem was that I was using a proxy for the net.
So I changed 2 build files to account for that proxy.
My file downloading is going great.
Ant kicks big-time ass.
BTW, building Tomcat is pretty time consuming.
It downloads something like 150 MB of libraries.
I'll explore those libraries also.
Damn cool!!!
Couldn't build it from Eclipse :((
Was able to build it using raw Ant.
Not that cool but anyways..
Wednesday, August 25, 2004
Bugs
Was calling a method on a null object.
Forgot to consider that the object might be null.
Monday, June 07, 2004
Enterprise Application Interaction
> Enterprise Application Interaction
>
> .Integration through data
> JDBC, JDO, and other ways to access data
> Using XML for data exchange (JAXP)
> Message brokers (JMS)
>
> .Business method integration
> RMI-IIOP and Java IDL for CORBA integration
> Using EJBs for integration
> The J2EE Connector Architecture (JCA)
> COM bridges for Windows integration
> Transaction (JTA) and security (JAAS) management
>
> .Presentation integration
> Servlets and JSP pages for client integration
>
> .B2B integration
> XML technologies and vocabularies
> XML/XSLT for building user interfaces
> SOAP, UDDI, and WSDL
> E-Marketplaces and portals
Monday, May 31, 2004
XQuery Intro
Declarative vs. Descriptive
This difference is often summarized by saying that query languages are
declarative (stating what you want), while programming languages are
descriptive (stating how you want it done). The difference is subtle, but
significant.
XPath 1.0 introduced a convenient syntax for addressing parts of an XML
document. If you need to select a node out of an existing XML document or
database, XPath is the perfect choice, and XQuery doesn't change that.
XSLT 1.0 (which was developed at the same time as XPath) takes XML querying
a step further, including XPath 1.0 as a subset to address parts of an XML
document and then adding many other features. XSLT is fantastic for
recursively processing an XML document or translating XML into HTML and
text. XSLT can create new XML or (copy) part of existing nodes, and it can
introduce variables and namespaces.
Finally, XSLT 1.0 encourages and often requires users to solve problems in
unnatural ways. XSLT is inherently recursive, but most programmers today
think procedurally; we think of calling functions directly ourselves, not
having functions called for us in an event-driven fashion whenever a match
occurs. Many people write large XSLT queries using only a single
rule, apparently unaware that XSLT's recursive matching
capabilities would cut their query size in half and make it much easier to
maintain.
XQuery also supports a really important feature that was purposely disabled
in XSLT 1.0, something commonly known as composition. Composition allows
users to construct temporary XML results in the middle of a query and then
navigate into them. This is such an important feature that many vendors
added extension functions, such as nodeset() in XSLT 1.0, to support it
anyway; XQuery makes it a first-class operation.
XQuery uses XML Schema 1.0 as the basis for its type system. Consequently,
these two standards share some terminology and definitions. XQuery also
provides some operators such as import schema and validate to support
working with XML schemas.
Every XQuery expression has a static type (compile-time) and a dynamic type
(run-time). The dynamic type applies to the actual value that results when
the expression is evaluated; the value is an instance of that dynamic type.
The static type applies to the expression itself, and can be used to perform
type checking during compilation. All XQuery implementations perform dynamic
type checking, but only some perform static type checking.
Every XQuery value is a sequence containing zero or more items. Each
individual item in a sequence is a singleton, and is the same as a sequence
of length one containing just that item. Consequently, sequences are never
nested.
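The "sequences are never nested" rule behaves like flattening: in XQuery, (1, (2, 3), 4) is the same sequence as (1, 2, 3, 4). A rough Java analogy (using streams, not XQuery itself):

```java
import java.util.List;
import java.util.stream.Collectors;

// Rough analogy for XQuery's flat sequences: combining sequences
// concatenates their items rather than nesting them, like flatMap.
public class FlatSequences {
    static List<Integer> combine(List<List<Integer>> seqs) {
        return seqs.stream()
                .flatMap(List::stream)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // (1, (2, 3), 4) flattens to (1, 2, 3, 4), never a nested list.
        System.out.println(combine(List.of(List.of(1), List.of(2, 3), List.of(4))));
        // [1, 2, 3, 4]
    }
}
```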
Every singleton item in XQuery has a type derived from item(). The item()
type is similar to the object type in Java and C#, except that it is
abstract: you can't create an instance of item(). (It's written with
parentheses in part to avoid confusion with user-defined types with the same
name and in part to be consistent with the XPath node tests.)
Items are classified into two kinds: XML nodes and atomic values. Nodes
derive from the type node(), and atomic values derive from
xdt:anyAtomicType. Like item(), the node() and xdt:anyAtomicType types are
abstract.
All of the atomic type names are in one of two namespaces: The XML Schema
type names are in the XML Schema namespace http://www.w3.org/2001/XMLSchema,
which is bound to the prefix xs. The XQuery type names are in the XQuery
type namespace http://www.w3.org/2003/11/xpath-datatypes, which is bound to
the prefix xdt. These prefixes are built in to XQuery.
Every XQuery expression evaluates to a sequence (a single item is equivalent
to a sequence of length one containing that item). Items in a sequence can
be atomic values or nodes. Collectively, these make up the XQuery Data
Model.
XQuery comments begin with the two characters (: and end with the two
characters :)
Every query begins with an optional section called the prolog. The prolog
sets up the compile-time context for the rest of the query, including things
like default namespaces, in-scope namespaces, user-defined functions,
imported schema types, and even external variables and functions (if the
implementation supports them). Each prolog statement must end with a
semicolon (;).
Each function definition starts with the keywords declare function, followed
by the name of the function, the names of its parameters (if any) and
optionally their types, optionally the return type of the function, and
finally the body of the function (enclosed in curly braces). For example:
declare function my:fact($n as xs:integer) as xs:integer
{
  if ($n < 2)
  then 1
  else $n * my:fact($n - 1)
};
Queries may be divided into separate modules. Each module is a
self-contained unit, analogous to a file containing code. Modules are most
commonly used to define function libraries, which can then be shared by many
queries using the import module statement in the prolog. Note that not every
implementation supports modules.
XQuery expressions may be embedded in XML constructors. For example:
<p>It is { true() or false() } that this is an example.</p>
=>
<p>It is true that this is an example.</p>
Sequence content is flattened before being inserted into the XML.
All of the built-in functions (except type constructors) belong to the
namespace http://www.w3.org/2003/11/xpath-functions, which is bound to the
prefix fn. This is also the default namespace for functions, which means
that unqualified function names are matched against the built-in functions.
For example, true() is the same as fn:true(), provided that you haven't
changed the default function namespace or the namespace binding for fn.
Operators
true() and false() => false()
true() or false() => true()
not(false()) => true()
if (expr < 0)
then "negative"
else if (expr > 0)
then "positive"
else "zero"
string-length("abcde") => 5
substring("abcde", 3) => "cde"
substring("abcde", 2, 3) => "bcd"
concat("ab", "cd", "", "e") => "abcde"
string-join(("ab","cd","","e"), "") => "abcde"
string-join(("ab","cd","","e"), "x") => "abxcdxxe"
contains("abcde", "e") => true
replace("abcde", "a.*d", "x") => "xe"
replace("abcde", "([ab][cd])+", "x") => "axde"
normalize-space(" a b cd e ") => "a b cd e"
1 eq 1 => true
1 eq 2 => false
1 ne 2 => true
1 gt 2 => false
1 lt 2 => true
Finally, there are three node comparison operators: <<, >>, and is. The node
comparison operators depend on node identity and document order. The is
operator returns true if two nodes are the same node by identity. The <<
operator is pronounced "before" and tests whether a node occurs before
another one in document order. Similarly, the >> operator is pronounced
"after" and tests whether a node occurs after another one in document order.
Variables in XQuery are written using a dollar sign symbol in front of a
name, like so: $variable. The variable name may consist of only a local-name
like this one, or it may be a qualified name consisting of a prefix and
local-name, like $prefix:local. In this case, it behaves like any other XML
qualified name. (The prefix must be bound to a namespace in scope, and it is
the namespace value that matters, not the prefix.)
The central expression in XQuery is the so-called "flower expression,"
named after the first letters of its clauses (for, let, where, order by,
return): FLWOR.
A typical FLWOR expression:
for $i in doc("orders.xml")//Customer
let $name := concat($i/@FirstName, $i/@LastName)
where $i/@ZipCode = 91126
order by $i/@LastName
return
  <Orders>{ $i//Order }</Orders>
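The FLWOR clauses above map closely onto a filter/sort/map pipeline. As a rough analogy only (using Java streams and an invented Customer record, not XQuery): "for" iterates the input, "where" filters, "order by" sorts, and "return" maps each binding to a result.

```java
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

public class FlworAnalogy {
    // Invented for the sketch; stands in for the Customer elements.
    record Customer(String firstName, String lastName, int zipCode) {}

    static List<String> query(List<Customer> customers) {
        return customers.stream()
                .filter(c -> c.zipCode() == 91126)                 // where
                .sorted(Comparator.comparing(Customer::lastName))  // order by
                .map(c -> c.firstName() + c.lastName())            // let + return
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Customer> cs = List.of(
                new Customer("Ada", "Lovelace", 91126),
                new Customer("Alan", "Turing", 10001),
                new Customer("Grace", "Hopper", 91126));
        System.out.println(query(cs)); // [GraceHopper, AdaLovelace]
    }
}
```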
FLWOR can introduce variables into scope
FLWOR is also useful for filtering sequences
Sort employee names by last name
for $e in doc("team.xml")//Employee
let $name := $e/Name
order by tokenize($name, " ")[2] (: Extract the last name :)
return $name
Joining two documents together:
for $i in doc("one.xml")//fish,
    $j in doc("two.xml")//fish
where $i/red = $j/blue
return
  <pair>{ $i, $j }</pair>
XQuery distinguishes between static errors that may occur when compiling a
query and dynamic errors that may occur when evaluating a query. Dynamic
errors may be reported statically if they are detected during compilation
(for example, xs:decimal("X") may result in either a dynamic or a static
error, depending on the implementation).
Most XQuery expressions perform extensive type checking. For example, the
addition $a + $b results in an error if either $a or $b is a sequence
containing more than one item, or if the two values cannot be added
together. For example, "1" + 2 is an error. This is very different from
XPath and XSLT 1.0, in which "1" + 2 converted the string to a number, and
then performed the addition without error.
XQuery also defines a built-in error() function that takes an optional
argument (the error value) and raises a dynamic error. In addition, some
implementations support the trace() function, which allows you to generate a
message without terminating query execution. See Appendix C for examples.
Many other XQuery operations may cause dynamic errors, such as type
conversion errors. As mentioned previously, often implementations are
allowed to evaluate expressions in any order or to optimize out certain
temporary expressions. Consequently, an implementation may optimize out some
dynamic errors. For example, error() and false() might raise an error, or
might return false. The only expressions that guarantee a particular
order-of-evaluation are if/then/else and typeswitch.
Wednesday, May 26, 2004
Connection Pooling in a Three-tier Environment
The following sequence of steps outlines what happens when a JDBC client
requests a connection from a DataSource object that implements connection
pooling:
* The client calls DataSource.getConnection.
* The application server providing the DataSource implementation looks in
its connection pool to see if there is a suitable PooledConnection object
(a physical database connection) available. Determining the suitability of
a given PooledConnection object may include matching the client's user
authentication information or application type, as well as using other
implementation-specific criteria. The lookup method and other methods
associated with managing the connection pool are specific to the
application server.
* If there are no suitable PooledConnection objects available, the
application server calls the ConnectionPoolDataSource.getPooledConnection
method to get a new physical connection. The JDBC driver implementing
ConnectionPoolDataSource creates a new PooledConnection object and returns
it to the application server.
* Regardless of whether the PooledConnection was retrieved from the pool or
was newly created, the application server does some internal bookkeeping to
indicate that the physical connection is now in use.
* The application server calls the method PooledConnection.getConnection to
get a logical Connection object. This logical Connection object is actually
a "handle" to a physical PooledConnection object, and it is this handle
that is returned by the DataSource.getConnection method when connection
pooling is in effect.
* The application server registers itself as a ConnectionEventListener by
calling the method PooledConnection.addConnectionEventListener. This is
done so that the application server will be notified when the physical
connection is available for reuse.
* The logical Connection object is returned to the JDBC client, which uses
the same Connection API as in the basic DataSource case. Note that the
underlying physical connection cannot be reused until the client calls the
method Connection.close.
Connection pooling can also be implemented in a two-tier environment where
there is no application server. In this case, the JDBC driver provides both
the implementation of DataSource, which is visible to the client, and the
underlying ConnectionPoolDataSource implementation.
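The getConnection/close lifecycle described above can be sketched as a toy pool in plain Java. This is a simplified model for illustration only: real pooling goes through javax.sql.ConnectionPoolDataSource and PooledConnection, and every class name below is invented.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy model of the pooling hand-shake: the "logical" handle given to the
// client returns its "physical" connection to the pool on close(),
// mirroring how closing the logical Connection lets the listener mark the
// physical connection as available for reuse.
public class ToyPool {
    static class PhysicalConn {}          // stands in for PooledConnection

    class LogicalConn {                   // stands in for the logical Connection
        final PhysicalConn phys;
        LogicalConn(PhysicalConn phys) { this.phys = phys; }
        void close() { idle.push(phys); } // "available for reuse"
    }

    private final Deque<PhysicalConn> idle = new ArrayDeque<>();
    int physicalCreated = 0;

    LogicalConn getConnection() {
        // Reuse a pooled physical connection if one is available,
        // otherwise create a new one (the getPooledConnection step).
        PhysicalConn phys = idle.isEmpty() ? newPhysical() : idle.pop();
        return new LogicalConn(phys);
    }

    private PhysicalConn newPhysical() {
        physicalCreated++;
        return new PhysicalConn();
    }

    public static void main(String[] args) {
        ToyPool pool = new ToyPool();
        LogicalConn c1 = pool.getConnection();    // creates a physical connection
        c1.close();                               // returns it to the pool
        pool.getConnection();                     // reuses it
        System.out.println(pool.physicalCreated); // 1
    }
}
```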
The following sequence of steps outlines what happens when a JDBC client
requests
a connection from a DataSource object that implements connection pooling:
n The client calls DataSource.getConnection.
n The application server providing the DataSource implementation looks in
its
connection pool to see if there is a suitable PooledConnection object- a
physical database connection-available. Determining the suitability of a
given
PooledConnection object may include matching the client's user
authentication
information or application type as well as using other
implementation-specific
criteria. The lookup method and other methods associated with managing the
connection pool are specific to the application server.
n If there are no suitable PooledConnection objects available, the
application
server calls the ConnectionPoolDataSource.getPooledConnection
method to get a new physical connection. The JDBC driver implementing
ConnectionPoolDataSource creates a new PooledConnection object and
returns it to the application server..n Regardless of whether the
PooledConnection was retrieved from the pool or
was newly created, the application server does some internal bookkeeping to
indicate that the physical connection is now in use.
n The application server calls the method PooledConnection.getConnection
to get a logical Connection object. This logical Connection object is
actually a
"handle" to a physical PooledConnection object, and it is this handle that
is
returned by the DataSource.getConnection method when connection pooling
is in effect.
n The application server registers itself as a ConnectionEventListener by
calling the method PooledConnection.addConnectionEventListener.
This is done so that the application server will be notified when the
physical
connection is available for reuse.
n The logical Connection object is returned to the JDBC client, which uses
the
same Connection API as in the basic DataSource case. Note that the
underlying physical connection cannot be reused until the client calls the
method
Connection.close.
Connection pooling can also be implemented in a two-tier environment where
there
is no application server. In this case, the JDBC driver provides both the
implementation of DataSource which is visible to the client and the
underlying
ConnectionPoolDataSource implementation.
Transaction Isolation Levels
Possible interactions between concurrent transactions are categorized as
follows:
* Dirty reads occur when transactions are allowed to see uncommitted
changes to the data. In other words, changes made inside a transaction are
visible outside the transaction before they are committed. If the changes
are rolled back instead of being committed, it is possible for other
transactions to have done work based on incorrect, transient data.
* Nonrepeatable reads occur when:
a. Transaction A reads a row
b. Transaction B changes the row
c. Transaction A reads the same row a second time and gets different results
* Phantom reads occur when:
a. Transaction A reads all rows that satisfy a WHERE condition
b. Transaction B inserts an additional row that satisfies the same condition
c. Transaction A reevaluates the WHERE condition and picks up the
additional "phantom" row
JDBC Driver Types
Types of Drivers
There are many possible implementations of JDBC drivers. These
implementations are categorized as follows:
* Type 1 - drivers that implement the JDBC API as a mapping to another data
  access API, such as ODBC. Drivers of this type are generally dependent on a
  native library, which limits their portability. The JDBC-ODBC Bridge driver
  is an example of a Type 1 driver.
* Type 2 - drivers that are written partly in the Java programming language
  and partly in native code. These drivers use a native client library
  specific to the data source to which they connect. Again, because of the
  native code, their portability is limited.
* Type 3 - drivers that use a pure Java client and communicate with a
  middleware server using a database-independent protocol. The middleware
  server then communicates the client's requests to the data source.
* Type 4 - drivers that are pure Java and implement the network protocol for
  a specific data source. The client connects directly to the data source.
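Whatever the driver type, the client-side API is the same: drivers register themselves with DriverManager, and the URL scheme selects among them. The snippet below lists whichever drivers happen to be registered in the running JVM (possibly none); the commented-out class name and JDBC URL are invented purely for illustration.

```java
import java.sql.Driver;
import java.sql.DriverManager;
import java.util.Enumeration;

// Enumerate the drivers currently registered with DriverManager.
public class ListDrivers {
    public static void main(String[] args) {
        // Loading a hypothetical Type 4 driver would look like:
        // Class.forName("com.example.PureJavaDriver");   // invented name
        // Connection con = DriverManager.getConnection(
        //         "jdbc:example://dbhost:1521/sales", "user", "pw");
        Enumeration<Driver> drivers = DriverManager.getDrivers();
        while (drivers.hasMoreElements()) {
            System.out.println(drivers.nextElement().getClass().getName());
        }
    }
}
```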
JDBC: User Defined Types
#3
A new Connection object created using the JDBC 2.1 core API has an
initially empty type map associated with it. A user may enter a custom
mapping for a UDT in this type map. When a UDT is retrieved from a data
source with the method ResultSet.getObject, the getObject method will check
the connection's type map to see if there is an entry for that UDT. If so,
the getObject method will map the UDT to the class indicated. If there is no
entry, the UDT will be mapped using the standard mapping.
A user may create a new type map, which is a java.util.Map object,
make an entry in it, and pass it to the java.sql methods that can perform
custom mapping. In this case, the method will use the given type map instead
of the one associated with the connection.
For example, the following code fragment specifies that the SQL type
ATHLETES will be mapped to the class Athletes in the Java programming
language. The code fragment retrieves the type map for the Connection object
con, inserts the entry into it, and then sets the type map with the new
entry as the connection's type map.
java.util.Map map = con.getTypeMap();
map.put("mySchemaName.ATHLETES", Class.forName("Athletes"));
con.setTypeMap(map);
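For getObject to be able to construct an Athletes instance, the class conventionally implements java.sql.SQLData, reading its attributes from an SQLInput stream in declaration order. A minimal sketch follows; the single NAME attribute is an assumption about the UDT's definition, since the original does not show it.

```java
import java.sql.SQLData;
import java.sql.SQLException;
import java.sql.SQLInput;
import java.sql.SQLOutput;

// Minimal SQLData sketch for the Athletes class named in the type map.
// The single String attribute is an assumed UDT layout, for illustration.
public class Athletes implements SQLData {
    private String sqlTypeName; // e.g. "mySchemaName.ATHLETES", set by the driver
    private String name;

    @Override
    public String getSQLTypeName() {
        return sqlTypeName;
    }

    @Override
    public void readSQL(SQLInput stream, String typeName) throws SQLException {
        sqlTypeName = typeName;
        name = stream.readString(); // read attributes in declaration order
    }

    @Override
    public void writeSQL(SQLOutput stream) throws SQLException {
        stream.writeString(name);   // write attributes in the same order
    }
}
```

The driver calls readSQL when materializing the UDT and writeSQL when the object is passed back, e.g. via PreparedStatement.setObject.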
JDBC
#1
A CallableStatement is used as follows:
CallableStatement cs = con.prepareCall("{call SHOW_SUPPLIERS}");
ResultSet rs = cs.executeQuery();
#2
By default a Connection object is in auto-commit mode, which means that it
automatically commits changes after executing each statement. If auto-commit
mode has been disabled, the method commit must be called explicitly in order
to commit changes; otherwise, database changes will not be saved.
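The manual-commit discipline above can be packaged as a small helper: turn auto-commit off, run the statements, commit them as one unit, and roll everything back on failure. This is a sketch against the standard java.sql interfaces; the SQL strings a caller would pass are hypothetical.

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

// Run several updates as a single transaction on an open Connection.
public class TransferExample {
    public static void runInTransaction(Connection con, String... sqlStatements)
            throws SQLException {
        boolean oldAutoCommit = con.getAutoCommit();
        con.setAutoCommit(false);  // changes now wait for an explicit commit
        try (Statement stmt = con.createStatement()) {
            for (String sql : sqlStatements) {
                stmt.executeUpdate(sql);
            }
            con.commit();          // make all the changes permanent at once
        } catch (SQLException e) {
            con.rollback();        // undo everything since the last commit
            throw e;
        } finally {
            con.setAutoCommit(oldAutoCommit); // restore the caller's mode
        }
    }
}
```

Restoring the previous auto-commit mode in the finally block keeps the helper from silently changing the connection's behavior for later callers.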