
andy kumar

Ranch Hand
since Jun 08, 2009

Recent posts by andy kumar

For a simple requirement like mine, I am planning to just create an enum with lower camel case. The only issue is that this is not the standard way to write an enum. Any other suggestions/comments? If not, I will simply go with my solution of creating the enum the non-standard way.
In a NoSQL database, it is advised to have a type parameter in all the objects/documents that are saved. This type param acts like the table name. So I created the following enum:
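The code block did not survive the paste; roughly, the enum looks like this (the constant and field names here are illustrative):

```java
// Each constant carries a lowercase "table name" that is stored as the
// type discriminator on every saved document.
enum Type {
    CUSTOMER("customer"),
    ORDER("order");

    private final String name;

    Type(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }
}
```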


And when saving a document, I have to do a Type.CUSTOMER.getName(). All this works, but I wanted to know if there is a better way to do this.
Pros: standard enum pattern followed everywhere.
Cons: Nothing prevents you from having the same name for two different enum values, which in this scenario is wrong (two tables with the same name).
Also, if I am using a Jackson object mapper, I have to create some kind of custom mapping so that what is saved to the database is "customer" and not "CUSTOMER".

If I want to save "CUSTOMER" in the database and not "customer", then all is good, as I can simply remove the "name" attribute and we are done. But if I want to store in lowercase, one other option is simply to have this:
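That simpler option would be, as a sketch:

```java
// Non-standard lowercase constants: name() is already the value to store,
// two constants can never share a stored name, and Jackson serializes the
// constant name with no custom mapping needed.
enum Type {
    customer,
    order
}
```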

Much, much simpler, and it solves all the problems listed above; the only thing is that I am not using the standard convention of all-caps names for enum constants.
What do you think is the better approach?
Also, I thought this was a design question, so I did not post it in the "OTHER BIG DATA" forum.



Thanks for the input. I think I will go with option 3, as I agree with the statement:
"After you serialize and deserialize them, you can build 'smart' business layer objects out of them."
I have a REST endpoint for getting credit card transactions; valid types are "charge" and "credit":
GET localhost:8080/test/creditcardtransaction?type=charge
I have the following ways to model my classes on the server side:

Option 1:
Two different classes with common fields repeated.
class RefundTransaction {
    String accountId;
    BigDecimal transactionAmount;
    String refundReason;
}

class ChargeTransaction {
    String accountId;
    BigDecimal transactionAmount;
    String chargeStatus;
}

Issues: A bunch of fields are duplicated in both classes. The client will have to unmarshal the response to the appropriate type based on the type parameter sent when calling the REST endpoint.

Option 2:
Use inheritance.
abstract class CreditCardTransaction {
    String accountId;
    BigDecimal transactionAmount;
}

class RefundTransaction extends CreditCardTransaction {
    String refundReason;
}

class ChargeTransaction extends CreditCardTransaction {
    String chargeStatus;
}

Good: No duplicate fields.
Issues: The client will have to unmarshal the response to the appropriate type based on the type parameter sent when calling the REST endpoint.

Option 3:
Moving common fields in another class, use composition:

class TransactionDetails {
    String accountId;
    BigDecimal transactionAmount;
}

class RefundTransaction {
    TransactionDetails transactionDetails;
    String refundReason;
}

class ChargeTransaction {
    TransactionDetails transactionDetails;
    String chargeStatus;
}

Good: No duplicate fields.
Issues: TransactionDetails can be instantiated independently of RefundTransaction or ChargeTransaction (which is not a major concern). The client will have to unmarshal the response to the appropriate type based on the type parameter sent when calling the REST endpoint.

Option 4:
Using strategy pattern

class TransactionDetails {
    String accountId;
    BigDecimal transactionAmount;
    TransactionType transactionType;
}

interface TransactionType {} // marker interface

class RefundTransactionType implements TransactionType {
    String refundReason;
}

class ChargeTransactionType implements TransactionType {
    String chargeStatus;
}

Good: No duplicate fields.
Issues: The client can unmarshal the object, but how will it know the concrete type? It will need to do an instanceof check, or we need a new getType() method, which defeats the whole purpose of having a uniform class "TransactionDetails".

Option 5:
Have separate variable for each transaction type
class TransactionDetails {
    String accountId;
    BigDecimal transactionAmount;
    RefundTransactionType refundTransactionType;
    ChargeTransactionType chargeTransactionType;
}

class RefundTransactionType {
    String refundReason;
}

class ChargeTransactionType {
    String chargeStatus;
}

Good: No duplicate code. The client will simply unmarshal the object and then use whichever transaction field is not null.
Issues: Every TransactionDetails object is bound to have a null for either the refund or the charge transaction.


I would like to know which is the best option, or am I overthinking this and should I simply go with option 1 or option 2?
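For what it's worth, whichever of options 1-3 is chosen, the "unmarshal to the appropriate type" step can be isolated in one small mapping on the client; this is only a sketch (TransactionTypes and targetClass are names I made up, not framework API):

```java
import java.math.BigDecimal;

abstract class CreditCardTransaction {
    String accountId;
    BigDecimal transactionAmount;
}

class ChargeTransaction extends CreditCardTransaction {
    String chargeStatus;
}

class RefundTransaction extends CreditCardTransaction {
    String refundReason;
}

final class TransactionTypes {
    // Maps the "type" query parameter to the concrete class to unmarshal into;
    // the switch keeps the type-based dispatch in exactly one place.
    static Class<? extends CreditCardTransaction> targetClass(String type) {
        switch (type) {
            case "charge": return ChargeTransaction.class;
            case "credit": return RefundTransaction.class;
            default: throw new IllegalArgumentException("unknown type: " + type);
        }
    }
}
```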
I was going through the refactoring technique "Replace Type Code with Subclasses". The original class is as follows:
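The code block was lost in the paste; Fowler's starting point for this refactoring is roughly the following (reconstructed from memory, so treat it as a sketch):

```java
// Type code held in a plain int field: nothing stops an invalid value,
// and type-dependent behavior tends to accumulate in switch statements.
class Employee {
    static final int ENGINEER = 0;
    static final int SALESMAN = 1;
    static final int MANAGER = 2;

    private final int type;

    Employee(int type) {
        this.type = type;
    }

    int getType() {
        return type;
    }
}
```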

Now this has been refactored to:
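The refactored version, again roughly sketched from memory:

```java
// Each type code becomes a subclass; a factory method keeps construction
// in one place and getType() becomes abstract, overridden per subclass.
abstract class Employee {
    static final int ENGINEER = 0;
    static final int SALESMAN = 1;
    static final int MANAGER = 2;

    abstract int getType();

    static Employee create(int type) {
        switch (type) {
            case ENGINEER: return new Engineer();
            case SALESMAN: return new Salesman();
            case MANAGER:  return new Manager();
            default: throw new IllegalArgumentException("invalid type code: " + type);
        }
    }
}

class Engineer extends Employee {
    int getType() { return Employee.ENGINEER; }
}

class Salesman extends Employee {
    int getType() { return Employee.SALESMAN; }
}

class Manager extends Employee {
    int getType() { return Employee.MANAGER; }
}
```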


Now my question is: when we have subclasses like Salesman and Engineer, does it still make sense to have an abstract getType() method? In other words, if there is a class hierarchy, is it good practice to also keep the type information and have each concrete class override the getType() method to return its specific type (where the type may be an enum or an integer value)?
Thanks for the reply. I think I did not ask the question properly. What I am looking for is an example where, if I have a HashMap/queue shared by two or more threads, it will cause an issue, and where replacing it with a ConcurrentHashMap/ConcurrentLinkedQueue fixes it. I know this is a very basic question, but any help would be appreciated.
I am trying to find some examples of race conditions in HashMap. One example I found is that if two threads try to do a put at the same time, the array resizing can cause an infinite loop, which I understand completely (http://mailinator.blogspot.com/2009/06/beautiful-race-condition.html).
But I am trying to think of another example of a race condition in a HashMap and how a ConcurrentHashMap fixes the issue.

Similarly for the Queue interface: LinkedList is considered a non-thread-safe queue, while ConcurrentLinkedQueue is considered thread safe. I am trying to find an example of a race condition in LinkedList and how it is solved by using a ConcurrentLinkedQueue.
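A simpler race than the resize loop is a lost update: two threads read the same current value, both modify it, and one write overwrites the other. The get-then-put idiom on a plain HashMap is a non-atomic check-then-act; ConcurrentHashMap fixes it by providing atomic compound operations such as merge(). A small sketch:

```java
import java.util.concurrent.ConcurrentHashMap;

class LostUpdateDemo {
    static final int THREADS = 8;
    static final int INCREMENTS = 10_000;

    static int run() {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        Thread[] workers = new Thread[THREADS];
        for (int i = 0; i < THREADS; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < INCREMENTS; j++) {
                    // merge() is atomic on ConcurrentHashMap; the same call on a
                    // plain HashMap is a non-atomic read-modify-write, so two
                    // threads can read the same old value and one update is lost.
                    map.merge("count", 1, Integer::sum);
                }
            });
            workers[i].start();
        }
        try {
            for (Thread t : workers) t.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return map.get("count");
    }

    public static void main(String[] args) {
        // Always 80000 with ConcurrentHashMap; typically less (or a corrupted
        // map) if you swap in a plain HashMap.
        System.out.println(run());
    }
}
```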
I have multiple consumers listening to a queue, and when a message arrives, a consumer's onMessage() function is invoked. Since there are multiple consumers, each consumer has its own session. The requirement is that the message should be acknowledged only if there are no issues and no exception is thrown...

AUTO_ACKNOWLEDGE mode: my understanding is that the message is acknowledged after onMessage() completes successfully. If there is an exception in onMessage(), the message will be redelivered.
CLIENT_ACKNOWLEDGE mode: at the end of onMessage(), I explicitly call acknowledge(). In case of an error, the acknowledge() method will not be called, so the message will be redelivered.
Transacted session: I call session.commit() at the end of the onMessage() function; in case of some exception, I catch it and call session.rollback(), and hence the message will be redelivered.

The consumer IS able to detect duplicate messages and handle them appropriately. My question is that all three modes are doing the same thing and serving my purpose, so which method is better than the others, and why? In short, why should I use transacted sessions over client acknowledgement or auto mode?
I do not want to use JTA/XA, as it is not supported by all JMS providers (for example ActiveMQ) and it will make my application slow.

If my consumer were NOT able to handle duplicate messages, then I understand that the only option I have is to use JTA/XA, because all the other options can send me the message again, which would result in duplicate processing. With JTA/XA I can also get the message again, but it would not be considered duplicate processing, as the previous transaction would have been rolled back.
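To make the transacted-session shape concrete, here is a compilable sketch; the two javax.jms.Session methods used are stubbed out so it stands alone without a JMS provider on the classpath (a real listener would implement javax.jms.MessageListener and obtain the Session from the provider):

```java
// Stubs for the two javax.jms.Session methods used below; in real code
// these are javax.jms.Session and javax.jms.Message.
interface Session {
    void commit();
    void rollback();
}

interface Message {}

class TransactedListener {
    private final Session session;

    TransactedListener(Session session) {
        this.session = session;
    }

    // Transacted-session pattern: commit only after successful processing,
    // roll back on any failure so the broker redelivers the message.
    public void onMessage(Message message) {
        try {
            process(message);
            session.commit();
        } catch (Exception e) {
            session.rollback();
        }
    }

    // Business logic goes here; duplicate detection would also live here.
    void process(Message message) throws Exception {
    }
}

class AckDemo {
    // Drives the listener with a recording Session and reports which call happened.
    static String outcome(boolean fail) {
        final boolean[] state = new boolean[2]; // [0] = committed, [1] = rolled back
        Session session = new Session() {
            public void commit() { state[0] = true; }
            public void rollback() { state[1] = true; }
        };
        TransactedListener listener = new TransactedListener(session) {
            @Override
            void process(Message m) throws Exception {
                if (fail) throw new Exception("processing failed");
            }
        };
        listener.onMessage(new Message() {});
        return state[0] ? "committed" : state[1] ? "rolledBack" : "none";
    }
}
```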
I have installed Jenkins on a Mac using the Mac installer. I have created an SVN repository in my Dropbox folder. I can check out the project from the repository from Eclipse and from the command line, and it all works fine. The issue is that when I try to configure the job in Jenkins, it is not able to connect to my repository.
I am specifying the repository URL in Jenkins as:

file:///Users/TestUser/Dropbox/SvnRepository/trunk/TestApp

but I get the following error:-

Unable to access file:///Users/TestUser/Dropbox/SvnRepository/trunk/TestApp : svn: E180001: Unable to open an ra_local session to URL
svn: E180001: Unable to open repository 'file:///Users/TestUser/Dropbox/SvnRepository/trunk/TestApp' (show details)
(Maybe you need to enter credential?)
6 years ago
OK, I got it working. Here it is:

<bean id="dataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource">
<property name="driverClassName" value="org.h2.Driver" />
<property name="url" value="jdbc:h2:~/myDB" />
</bean>

Now when I start my webapp, it will create the DB in my home folder with the name myDB.h2.db. I can also initialize my DB using the below config:

<jdbc:initialize-database data-source="dataSource" >
<jdbc:script location="classpath:schema.sql"/>
<jdbc:script location="classpath:test-data.sql"/>
</jdbc:initialize-database>

My second requirement was to initialize the DB only when I run my webapp for the very first time; after that I should keep using the same old DB. For that I had to add ignore-failures="ALL" to my init config, i.e.:
<jdbc:initialize-database data-source="dataSource" ignore-failures="ALL">

Even though I consider ignore-failures a hack, it solves my purpose right now.

6 years ago
Is it possible for you to share the configuration info? Also, if I do specify the URL, don't I have to manually start the DB before running the app?
6 years ago
I am creating a Spring MVC CRUD application, using H2 as the embedded database with the following config:
<jdbc:embedded-database id="dataSource" type="H2">
<jdbc:script location="classpath:schema.sql"/>
<jdbc:script location="classpath:test-data.sql"/>
</jdbc:embedded-database>

The H2 DB works fine for me. My only problem is that whenever I restart the server, all my changes are lost, as each time the server starts it creates the DB again. Is there a way to persist that information and not wipe it on restart? The reason I want this is that I don't want a separate DB server installed. I want to create a WAR file, and all the client has to do is deploy this file, without caring about installing a DB. This is a small desktop application which will not have huge data, hence I want to use an embedded DB.
6 years ago