Ranadhir Nag

Ranch Hand
since Mar 09, 2006
Recent posts by Ranadhir Nag

I have a bean which I am trying to create through an instance factory method that takes parameters. The bean and factory class are given below, along with the metadata.


spring-context.xml:


When I execute the following test, I get a NullPointerException. Need help in understanding the issue:

@Autowired
@Qualifier("commands")
private commandManager commands;

@Test
public void testCreateAddService() {
    addservice = commands.createAddServiceInstance("Haven", 90897);
    assertNotNull(addservice);
}

Testcase: testCreateAddService took 0.5 sec
Caused an ERROR
null
java.lang.NullPointerException
    at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:517)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1029)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:925)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:490)
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:461)
    at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:314)
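For orientation, here is a minimal sketch of the kind of instance factory method being described - the class and method bodies are hypothetical, not the actual code, and the XML would typically declare the created bean with factory-bean="commands", factory-method="createAddServiceInstance" and one <constructor-arg> per parameter:

// Hypothetical shape only - names chosen to mirror the test above.
public class CommandManager {

    // Instance (non-static) factory method taking parameters; Spring resolves
    // each parameter from a matching <constructor-arg> in the bean definition.
    public AddService createAddServiceInstance(String name, int port) {
        return new AddService(name, port);
    }
}

class AddService {
    private final String name;
    private final int port;

    AddService(String name, int port) {
        this.name = name;
        this.port = port;
    }
}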
11 years ago


I have multiple implementations of an Employee interface - each one representing a particular employee type based on salary classification (HourlyEmployee, MonthlyEmployee etc.).
Each employee type contains a PayClassification member. The PaymentClassification class has a single member, 'schedule'.


A client class ExecutePay instantiates a particular EmployeeType and queries for its PaymentType.
The metadata file is as follows:


To have PayClassification carry a unique value of 'schedule' for each employee type, the easiest solution is to create a unique PayClassification 'resource' corresponding to each employee type and autowire that.
But that is not an ideal, extensible solution.
Is there a way to autowire PayClassification with a unique 'schedule' value for every employee type being defined in the system, dynamically?
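For reference, a minimal sketch of the structure being described - the names are approximate, not the actual classes or metadata from this post, and each type would live in its own source file:

public interface Employee {
    PaymentClassification getPaymentClassification();
}

public class PaymentClassification {
    private String schedule;                          // e.g. "hourly", "monthly"
    public String getSchedule() { return schedule; }
    public void setSchedule(String schedule) { this.schedule = schedule; }
}

public class HourlyEmployee implements Employee {
    private PaymentClassification classification;     // the member to be autowired
    public PaymentClassification getPaymentClassification() { return classification; }
}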
11 years ago
This is my current configuration:
XSD



WSDL


However, the moment ref="Stock" is changed to type="Stock", wsdl2java starts giving: Type {http://stock.com/schemas/services/stock}Stock is referenced but not defined.

Somehow it seems to be a clash between the WSDL and XSD imports - but I just can't resolve it. Help is appreciated.
12 years ago
I have a Stock, which contains an array of Quotes.
I am trying to pass an array of such stocks to the client from an Axis 1.4-built web service.
Here are the relevant snippets from the XSD, WSDL and implementation files:


I get an exception:
org.xml.sax.SAXParseException: Element type "Stock" must be followed by either attribute specifications, ">" or "/>".

However, a String array is marshalled just fine.
Is there something I am missing? Help is appreciated.
12 years ago
Is that a complete explanation of this behaviour?
It is true that I am modifying 'ref' to update its next element.
However, it is only 'tail' which is instructed to refer to a separate element down the chain.
Head should still keep pointing to the same 'ref' - albeit a 'ref' with a modified 'next' element.

And, I get the correct behavior if I use a volatile instead of an AtomicReference - but it isn't fun till we know why something is not working.
Here's an attempt at writing a concurrent linked implementation using CAS. The glitch is that the head node moves along with the tail - though I have not done anything to cause it to do so. It may just be a careless mistake - but I would appreciate help in seeing where I am going wrong. As elements are added at the tail and the tail is shifted, the head shifts as well, whereas it is not supposed to do so.

..... .... And the Node member class:
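For comparison, here is a minimal sketch of the usual shape of a CAS-based enqueue with distinct head and tail references (Michael-Scott style) - hypothetical names, not the code from this post. The point of interest is that head and tail start out on the same dummy node but are separate AtomicReference fields, and advancing the tail never reassigns the head:

import java.util.concurrent.atomic.AtomicReference;

public class SimpleCasQueue<E> {

    private static class Node<E> {
        final E item;
        final AtomicReference<Node<E>> next = new AtomicReference<Node<E>>(null);
        Node(E item) { this.item = item; }
    }

    // Separate references: both begin at a dummy sentinel node.
    private final AtomicReference<Node<E>> head;
    private final AtomicReference<Node<E>> tail;

    public SimpleCasQueue() {
        Node<E> dummy = new Node<E>(null);
        head = new AtomicReference<Node<E>>(dummy);
        tail = new AtomicReference<Node<E>>(dummy);
    }

    public void enqueue(E item) {
        Node<E> newNode = new Node<E>(item);
        while (true) {
            Node<E> curTail = tail.get();
            Node<E> tailNext = curTail.next.get();
            if (tailNext != null) {
                // Another thread is mid-enqueue: help by swinging tail forward.
                tail.compareAndSet(curTail, tailNext);
            } else if (curTail.next.compareAndSet(null, newNode)) {
                // Linked the new node; try to advance tail (fine if this CAS loses).
                tail.compareAndSet(curTail, newNode);
                return;
            }
        }
    }
}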
A native LRU cache (not using LinkedHashMap):

import java.util.HashMap;

public class NativeLRUCache<E, V> {

    HashMap<E, V> data = new HashMap<E, V>();
    NativeLinkedQueue<E> iq = new NativeLinkedQueue<E>();
    final Integer bound;

    public NativeLRUCache(Integer bound) {
        this.bound = bound;
    }

    public void put(E e, V v) {
        if (iq.contains(e) != null) {          // => iteration
            iq.displace(e);                    // => iteration - bringing e to the head
            data.put(e, v);
        } else {
            if (data.size() < bound) {
                data.put(e, v);
                iq.put(e);
            } else {
                E cull = iq.accomodate(e);     // => removing the tail item to accommodate 'e' at the head
                data.remove(cull);
                data.put(e, v);
            }
        }
    }
}

What would be the optimum locking model here?
The idea is to inspect/update the HashMap and the linked list as a single atomic operation, while still allowing concurrent readers/iterations.
I would appreciate and welcome any attempts - 'synchronized' and reader/writer (ReentrantLock) locking seem too restrictive and too complex, respectively.
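For comparison only, one middle-ground shape is a single ReadWriteLock: each put takes the write lock around the compound map-plus-list update, while plain lookups share the read lock. This is a sketch, not a claim about the optimal model; it uses LinkedList for the recency order purely to stay self-contained:

import java.util.HashMap;
import java.util.LinkedList;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class LockedLRUCache<E, V> {

    private final HashMap<E, V> data = new HashMap<E, V>();
    private final LinkedList<E> order = new LinkedList<E>();   // head = most recently used
    private final int bound;
    private final ReadWriteLock lock = new ReentrantReadWriteLock();

    public LockedLRUCache(int bound) {
        this.bound = bound;
    }

    public void put(E e, V v) {
        lock.writeLock().lock();
        try {
            if (data.containsKey(e)) {
                order.remove(e);                       // move existing key to the head
            } else if (data.size() >= bound) {
                data.remove(order.removeLast());       // evict the least recently used entry
            }
            order.addFirst(e);
            data.put(e, v);
        } finally {
            lock.writeLock().unlock();
        }
    }

    public V get(E e) {
        lock.readLock().lock();
        try {
            return data.get(e);                        // note: does not refresh recency
        } finally {
            lock.readLock().unlock();
        }
    }
}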
Read the following:
'The LinkedList class is often used to store a list, or queue, of work elements --
tasks waiting to be executed. However, the List interface offers far more flexibility
than is needed for this common application, which in general only inserts elements
at the tail and removes elements from the head. But the requirement to support the
full List interface means that LinkedList is not as efficient for this task as it might
otherwise be. The Queue interface is much simpler than List -- it includes only
put() and take() methods, and enables more efficient implementations than
LinkedList.'

What is the Queue providing us that the LinkedList doesn't?
Why, and in what circumstances, would I go for a Queue?
In-depth analytical comments will be much appreciated.
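For a concrete contrast, the sketch below does the same add-at-tail / remove-from-head hand-off against a LinkedList and against a Queue implementation; the narrow insert/remove contract is what lets implementations such as ConcurrentLinkedQueue avoid the baggage of the full List interface (illustrative code, not from the quoted article):

import java.util.LinkedList;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class QueueVsList {
    public static void main(String[] args) {
        // LinkedList exposes the whole List API (get(i), add(i, e), listIterator, ...)
        // and needs external locking if several threads touch it.
        LinkedList<String> tasksAsList = new LinkedList<String>();
        tasksAsList.addLast("task-1");
        String nextFromList = tasksAsList.removeFirst();

        // A Queue only promises insert-at-tail / remove-at-head, which lets an
        // implementation like ConcurrentLinkedQueue use non-blocking CAS internally.
        Queue<String> tasksAsQueue = new ConcurrentLinkedQueue<String>();
        tasksAsQueue.offer("task-1");
        String nextFromQueue = tasksAsQueue.poll();

        System.out.println(nextFromList + " / " + nextFromQueue);
    }
}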
12 years ago
This is seemingly an elementary question.
If we create an MQ or JMS queue connection - does it need synchronization for multiple threads to be able to send messages to it?

// Queue manager connection settings
MQEnvironment.hostname = hostname;
MQEnvironment.channel = channel;
MQEnvironment.port = portNumber;
qMgr = new MQQueueManager(qManager);
// Open the queue for both get (as defined on the queue) and put
openOptions = MQC.MQOO_INPUT_AS_Q_DEF | MQC.MQOO_OUTPUT;
Queue = qMgr.accessQueue(outQueue, openOptions, null, null, null);

Can this 'Queue' handle be passed to multiple threads to send messages without synchronization?
If not, why?

Answers with proper justifications will be really appreciated.
Not exactly sure how a blocking queue may help in this case - maybe a task queue with a thread pool? Though not sure how.
The idea that I needed help on is whether a use case like this - where every read is accompanied by removal of the read data from the container - is a good case for ConcurrentHashMap.
This is considering the fact that a set of threads will be operating on the container at any given time.
Or is it better to go with a Hashtable (fully synchronized)?
I need to select the most apt data structure to hold a set of XML strings.
These strings would be dispatched to MQ by a set of threads - each thread deleting the string it has picked up from the container.
Moreover, when the pending number of messages falls below a threshold, the container is replenished with a fresh set of additional messages.

Given this scenario, where every message read from the container also results in a write (deletion) - what's the best available option?
I was planning to use ConcurrentHashMap (or a derived ConcurrentHashSet) - but think a Hashtable to be equally apt.

I will be doing a perf test for both - but wanted to get an opinion in any case.
A completely fresh line of thinking (data structure) is most welcome too.
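For what it's worth, a minimal sketch of the read-equals-delete pattern over ConcurrentHashMap: remove(key) claims and deletes an entry in one atomic step, without the single table-wide lock that Hashtable would take. Names are illustrative only, not a verdict on the options weighed above:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class PendingMessages {

    private final Map<String, String> pending = new ConcurrentHashMap<String, String>();

    public void add(String id, String xml) {
        pending.put(id, xml);
    }

    // Returns the message and deletes it in one atomic step, or null if
    // another dispatcher thread has already claimed it.
    public String claim(String id) {
        return pending.remove(id);
    }

    public boolean needsReplenish(int threshold) {
        return pending.size() < threshold;   // size() is only an estimate under concurrency
    }
}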

I believe I have read that RuntimeExceptions raised from session beans do not propagate as-is to the client, and cause the bean instances to be removed from the pool and discarded.
We have a stateless session bean throwing an IllegalArgumentException.
This gets wrapped as an EJBException at the client, which is as expected.
However, we do not see the bean instance being discarded from the pool (ejbRemove is not invoked).

I would have expected ejbRemove to have been invoked for this bean instance.
The pooling parameters are:
<stateless-session-descriptor>
    <pool>
        <max-beans-in-free-pool>50</max-beans-in-free-pool>
        <initial-beans-in-free-pool>5</initial-beans-in-free-pool>
    </pool>
</stateless-session-descriptor>

Any help is appreciated.

I have inherited code with a stateless session bean where all its remote methods use container-managed transactions. Inside some of these methods JDBC is used directly, with auto-commit turned on (assuming that's the default in WebLogic 10.x).
Further, there are explicit connection.commit() calls in some places - and connection.prepareStatement('begin transaction')/('end transaction') executions in others.

Do these JDBC statements subvert the CMT configuration?
What do I need to ensure if I am reviewing this code for sanity?

(I am assuming that in the case of BMT, the JDBC connection settings take precedence anyway.)
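To make the review concern concrete, this is roughly the pattern described above, with illustrative names rather than the actual inherited code - a CMT method that also commits the JDBC connection by hand, so that work stays committed even if the container later rolls the transaction back:

import java.sql.Connection;
import java.sql.PreparedStatement;
import javax.sql.DataSource;

// Illustrative sketch of the pattern under review, not the real bean.
public class PayrollBean /* stateless session bean, trans-attribute = Required */ {

    private DataSource dataSource;   // assume obtained via JNDI or injection elsewhere

    public void adjustSalary(long employeeId, double delta) throws Exception {
        Connection con = dataSource.getConnection();
        con.setAutoCommit(false);    // elsewhere in the inherited code auto-commit is left on
        try {
            PreparedStatement ps =
                con.prepareStatement("update employee set salary = salary + ? where id = ?");
            ps.setDouble(1, delta);
            ps.setLong(2, employeeId);
            ps.executeUpdate();

            // The questionable part: committing the connection directly finalizes this
            // update outside the container-managed transaction, so a later container
            // rollback will not undo it.
            con.commit();
        } finally {
            con.close();
        }
    }
}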
Is there a way for a NON-DURABLE subscriber to retrieve the unconsumed messages from the topic, in case the previous connection/subscription was accidentally closed?
The messages sent to the topic are persistent.

We are on WebLogic 10.0 - but suggestions are welcome for the later versions too.
Appreciate the help.
I believe that we prefer to validate all business scenarios in the Action class rather than the ActionForm - simply because ActionForms are meant to contain application state driven by the business conditions.
So, validating business rules in the form is risky, as the same form may be used in different scenarios needing different state and validations.

But is there any other technical reason (vis-à-vis a design decision) why the Action class should be preferred over the ActionForm for handling business scenarios?
12 years ago