Claude Moore

Bartender
since Jun 24, 2005
Claude likes: IBM DB2, Java, NetBeans IDE, Spring
Forum Moderator
Claude Moore currently moderates these forums:
Italy
Cows and Likes
Cows: 38 received in total (2 in the last 30 days), 5 given
Likes: 101 received in total (2 in the last 30 days), 249 given (2 in the last 30 days)

Recent posts by Claude Moore

In the last few weeks I have spent almost all of my spare time playing a bit with deep reinforcement learning, and I can say I have rarely had such a frustrating experience.
As a software developer / software architect, I'm quite used to having to study new technologies and to shift my mental point of view. But RL is a damn evil beast to tame.
First, the impression I got is that the whole field is (still) really brittle, at least if you are not a real expert.
I mean, while developing software you know you can rely upon some consolidated best practices, some recipes that, more or less, help you solve your problem and take a step further.
With RL I think that's not the case. I was playing with OpenAI Gym, which provides you with some "environments" your "agents" can play with.
If you try to solve a problem with a model that is too complex, you will fail.
If your model is too simple, it will fail.
If you change some of the hyperparameters of your neural network, you risk failing.
What astonishes me is that RL seems to be a big puzzle that can only be solved with a lot of experimentation, a huge amount of time and data, and, it seems, mainly with the help of a good dose of luck - something I believe is hard to reconcile with science in general.
I have to admit I'm a beginner swimming in a sea of ignorance (my own ignorance, of course), but even though I have made some progress, I'm really upset, and I wonder whether someone else here has had the same experience.
Thanks in advance.
With respect to your question about which database to use,  my favourite choice when I want to play with some technology is MySQL. It's also one of the most widely used in tutorials all over the web.
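If it helps, this is roughly the kind of throwaway code I use to check that a local MySQL instance is reachable from Java. It's a minimal sketch: the database name, user and password are placeholders, and it assumes the MySQL Connector/J driver is on the classpath.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Quick smoke test: connect to a local MySQL instance and print its version.
public class MySqlSmokeTest {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:mysql://localhost:3306/playground"; // placeholder schema name
        try (Connection conn = DriverManager.getConnection(url, "dev", "dev");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT VERSION()")) {
            if (rs.next()) {
                System.out.println("Connected to MySQL " + rs.getString(1));
            }
        }
    }
}
```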
Thanks for suggesting this library to us. IMHO, at the moment the best choice for practicing AI is Python + Keras, but as a seasoned Java aficionado I hope that frameworks like the one you suggested will keep growing: AI is definitely the next big thing (not sure if 'next' is even appropriate).

Sriram Sharma wrote:
Paul,
So you had only one database for all services. Is it?



I think I understand what you're asking between the lines: whether each microservice should use its own database.
Well, in theory each microservice should be completely isolated from the other microservices, except of course for the API it exposes. In that sense, using separate databases makes perfect sense.
But in practice I couldn't say it's always a good idea. For example, let's suppose that in the scenario you described a microservice A is responsible for registering orders and calls microservice B to book goods in the warehouse.
The internal representation of an "inventory item" could in theory be very different in each microservice. And that's fine, because in theory you could replace microservice B with something else. But what happens when you need to create some report
that joins orders and inventory data, for example how many goods were shipped last month, and to fulfill which orders? In the end, you either build a report service that directly accesses both databases managed by A and B, or
you manually get the orders data from A and the inventory data from B and put them together yourself. I don't think that's a handy process...
In my humble opinion, it's better if the database is the same, with some strict rules imposed to prevent services from bypassing each other by directly accessing tables they are not responsible for managing.
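To make the reporting problem concrete, here is a minimal sketch (all class and method names are hypothetical) of what a report service ends up doing when orders and inventory live in separate databases: it has to fetch data from each service's API and join it in memory.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical report service joining data owned by two separate microservices.
public class ShippedGoodsReport {

    // Simplified views of the data each service exposes through its API.
    record Order(long orderId, String customer) {}
    record Shipment(long orderId, String sku, int quantity) {}

    // In a real system these would be REST clients calling service A and service B.
    interface OrderClient { List<Order> ordersShippedLastMonth(); }
    interface InventoryClient { List<Shipment> shipmentsLastMonth(); }

    private final OrderClient orders;
    private final InventoryClient inventory;

    ShippedGoodsReport(OrderClient orders, InventoryClient inventory) {
        this.orders = orders;
        this.inventory = inventory;
    }

    // The "join" has to happen in memory, because no single database sees both datasets.
    Map<Order, List<Shipment>> shippedGoodsByOrder() {
        Map<Long, List<Shipment>> shipmentsByOrderId = inventory.shipmentsLastMonth().stream()
                .collect(Collectors.groupingBy(Shipment::orderId));
        return orders.ordersShippedLastMonth().stream()
                .collect(Collectors.toMap(
                        order -> order,
                        order -> shipmentsByOrderId.getOrDefault(order.orderId(), List.of())));
    }
}
```

With a shared database, this whole class collapses into a single SQL join.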


3 weeks ago
Generally speaking, when dealing with microservices you have to give up the 'classical' concept of a (global) transaction in favour of eventual consistency.
I suggest you read Martin Fowler's article about microservice trade-offs.
In a nutshell, with a monolithic architecture it is more or less easy to wrap all service operations in a single, distributed transaction which, of course, can only be committed or rolled back.
One of the cornerstones of microservices is that each microservice is responsible only for the data it manages - data that may reside in a different database - with no global transaction manager to help you avoid inconsistencies.

You need to design your application carefully to manage consistency "manually". In the example you posted, you may want to only "reserve" items in your inventory / warehouse when an order is placed by a customer.
The required goods are marked as "reserved", and only when the payment is completed can you mark them as "sold".
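A minimal sketch of that idea, with hypothetical names and states, could look like this:

```java
import java.util.Objects;

// "Reserve first, sell on payment" with an explicit compensating action.
public class InventoryItem {

    enum Status { AVAILABLE, RESERVED, SOLD }

    private Status status = Status.AVAILABLE;
    private Long reservedForOrderId;

    // Called by the order service when the customer places an order.
    public void reserveFor(long orderId) {
        if (status != Status.AVAILABLE) {
            throw new IllegalStateException("Item is not available");
        }
        status = Status.RESERVED;
        reservedForOrderId = orderId;
    }

    // Called only after the payment service confirms the payment.
    public void markSold(long orderId) {
        if (status != Status.RESERVED || !Objects.equals(reservedForOrderId, orderId)) {
            throw new IllegalStateException("Item is not reserved for this order");
        }
        status = Status.SOLD;
    }

    // Compensating action if the payment fails or the order is cancelled.
    public void releaseReservation(long orderId) {
        if (status == Status.RESERVED && Objects.equals(reservedForOrderId, orderId)) {
            status = Status.AVAILABLE;
            reservedForOrderId = null;
        }
    }
}
```

The compensating release is what replaces the rollback you would otherwise get from a global transaction.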
3 weeks ago
In the scenario you described I would not refactor Service B. Personally, I tend to keep REST services as coarse-grained as possible, preferably as a composition of several service classes that I write as if they were library classes. I can't see the point of calling 4 different endpoints to perform relatively simple operations like the ones you are talking about.
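As an illustration, here is a rough sketch (Spring MVC style, all names hypothetical) of what I mean by a coarse-grained endpoint composed of library-like service classes:

```java
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

import java.util.List;

// One coarse-grained endpoint; the fine-grained steps stay internal service classes.
@RestController
@RequestMapping("/orders")
public class OrderResource {

    // Hypothetical collaborators, written like ordinary library classes.
    public interface OrderValidator { void check(OrderRequest request); }
    public interface InventoryService { void reserve(List<String> skus); }
    public interface OrderRepository { OrderConfirmation save(OrderRequest request); }

    public record OrderRequest(long customerId, List<String> skus) {}
    public record OrderConfirmation(long orderId) {}

    private final OrderValidator validator;
    private final InventoryService inventory;
    private final OrderRepository orders;

    public OrderResource(OrderValidator validator, InventoryService inventory, OrderRepository orders) {
        this.validator = validator;
        this.inventory = inventory;
        this.orders = orders;
    }

    // A single call does the whole unit of work instead of forcing the client
    // to hit separate /validate, /reserve and /save endpoints.
    @PostMapping
    public OrderConfirmation placeOrder(@RequestBody OrderRequest request) {
        validator.check(request);
        inventory.reserve(request.skus());
        return orders.save(request);
    }
}
```

The client makes one call; how the work is split up internally remains an implementation detail.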
1 month ago

Carey Brown wrote:Exceptions should not (in general) be used for flow control.



That's interesting. What do you mean by flow control? I've written at least a dozen REST APIs, all of them following this pattern:

- check phase: check whether the parameters passed are valid and, if not, throw a proper exception;
- business logic execution: execute the business logic and throw an exception only if an unrecoverable error is met.

For example, one such API reserves goods in a warehouse for a given customer order. During the check phase, my API throws an exception if, for example, an already fulfilled order is passed as a parameter.
To be honest, I don't think - but I could be wrong, of course! - that doing so is bad practice. On the contrary, localizing all formal checks (e.g. whether the given order is valid or not) at the very beginning of my API has led me to write clearer code.
What am I missing?
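To make the pattern concrete, here is a rough, hypothetical sketch of the shape such an API takes on my side:

```java
// Hypothetical sketch of the "check phase first, then business logic" pattern.
public class ReservationService {

    public static class InvalidOrderException extends RuntimeException {
        public InvalidOrderException(String message) { super(message); }
    }

    public static class Order {
        private final long id;
        private final boolean fulfilled;
        public Order(long id, boolean fulfilled) { this.id = id; this.fulfilled = fulfilled; }
        public long id() { return id; }
        public boolean isFulfilled() { return fulfilled; }
    }

    public void reserveGoods(Order order, String sku, int quantity) {
        // Check phase: validate all parameters up front and fail fast.
        if (order == null) {
            throw new InvalidOrderException("Order must not be null");
        }
        if (order.isFulfilled()) {
            throw new InvalidOrderException("Order " + order.id() + " is already fulfilled");
        }
        if (quantity <= 0) {
            throw new IllegalArgumentException("Quantity must be positive");
        }

        // Business logic phase: only unrecoverable errors become exceptions here.
        // ... reserve 'quantity' units of 'sku' for the order ...
    }
}
```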
1 month ago
Thanks for getting back to us and letting us know you found a working solution!
Let me say I am still puzzled by the fast pace of new Java releases...
2 months ago

Nathan Milota wrote:

Claude Moore wrote:
I don't know what requirements I need.  I have a PC and Windows 10.


What CPU and how much memory do you have?

2 months ago
When does your Eclipse installation freeze? During code autocompletion? When you run your application? Something else?
As suggested by Liutauras, are you sure the hardware requirements are satisfied?
2 months ago

Liutauras Vilda wrote: However, if there are specific problems with Eclipse on your machine, you can try other IDE(s) as IntelliJ, NetBeans.



It seems that STS 4.0 is supported neither on NetBeans nor on IntelliJ.
2 months ago
I would suggest you talk to your system administrator. The trace you posted looks like a normal shutdown operation issued by a remote agent controller, not a failure.
2 months ago

Peter Rooke wrote:
I do think Sun made a mistake of continuing to use the Enterprise Java Beans name with the improved version 3.  


Well, a bad reputation is definitely something you have to deal with when selling something, and when you are proposing a technology stack you must take into account how good a technology is perceived to be, not only how good it actually is. Keeping the EJB name didn't help, even though EJB 3 has nothing to do with the cumbersome complexity of EJB 2.0. Something very similar may happen with the MicroProfile initiative: most of the programmers I know think that stuff isn't anything better than old-fashioned Java EE pruned here and there, and their attitude is almost always "please, stay away from me". Personally, I believe that being able to choose which parts of Java EE (now Jakarta EE) to adopt for your own real needs is a good thing, and I think that if MicroProfile had been thrown into the IT arena years ago, it would have been a great benefit for Java in the enterprise context.
2 months ago

Tim Holloway wrote:
I can think of only 2 other fad blunders of that magnitude: Building major systems based on CORBA (remember CORBA? Millenials won't get this!). And building major systems on OLE/ActiveX.



... and I would add applet-based systems to the list of broken promises as well.
As for OS/2, I think it was simply too resource-hungry for the time it was released. I remember one of my friends who installed it on his home PC and was in despair over the huge (for the time) amount of memory it required.
2 months ago