Matt Wong
Ranch Hand, since Aug 18, 2017 - Magdeburg
Recent posts by Matt Wong

googling "javamail" pops up as first entry: https://javaee.github.io/javamail/
clicking "EE4J" on the red banner that appears at the bottom redirects to: https://projects.eclipse.org/projects/ee4j
selecting Jakarta Mail from the related projects on the right: https://projects.eclipse.org/projects/ee4j.javamail - which shows the latest release as 1.6.4 on September 10th, 2019 - which finally links to: https://projects.eclipse.org/projects/ee4j.javamail/releases/1.6.4
the 1.6.4 release itself only says "part of Jakarta EE 8" - and, although I can't find anywhere to download Jakarta EE 8 from 10.09.2019, 1.6.4 is the current final public release version of "JavaMail"
InetAddress.isReachable() doc:
"A typical implementation will use ICMP ECHO REQUESTs if the privilege can be obtained, otherwise it will try to establish a TCP connection on port 7 (Echo) of the destination host."
flaws:
1) TCP/7 is obsolete and no longer in use - it will be hard to find any active ECHO service
2) opening TCP/7 would require root privileges - unlikely on Android
3) ICMP also requires root privileges - even more unlikely, as "userland" Java only offers TCP/IP and UDP/IP

So, in addition to "it's a bad idea to flood the whole subnet with SYN on TCP/7 that no one will listen to", the tried approach relies heavily on the implementation even having the rights for ICMP - which, on Linux, needs elevated privileges that are most likely not available.
As I mentioned earlier: use multicast!
Also: don't rely on one device being the server and the other the client - each device is both at the same time! So, when using multicast, unless each desired node is already connected, each node listens for multicast while also sending multicast at the same time - and is thereby able to find other offered nodes and offer itself.
I don't know if Android provides its own classes for multicast - but it should be possible to use the SE API and let the DX compiler do the magic.
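A minimal sketch of this "every node is both server and client" discovery, using the plain SE `MulticastSocket` API - the group address 230.0.0.1 and port 4446 are arbitrary example values, not anything prescribed:

```java
import java.net.*;
import java.nio.charset.StandardCharsets;

// Every node joins the same multicast group, listens for announcements,
// and announces itself - no fixed server role anywhere.
public class MulticastDiscovery {
    static final String GROUP = "230.0.0.1"; // example group in the ad-hoc range
    static final int PORT = 4446;            // example port

    public static void main(String[] args) throws Exception {
        InetAddress group = InetAddress.getByName(GROUP);

        // Listener: joins the group and prints every announcement it hears.
        Thread listener = new Thread(() -> {
            try (MulticastSocket socket = new MulticastSocket(PORT)) {
                socket.joinGroup(group);
                byte[] buf = new byte[256];
                while (true) {
                    DatagramPacket packet = new DatagramPacket(buf, buf.length);
                    socket.receive(packet);
                    String msg = new String(packet.getData(), 0, packet.getLength(),
                            StandardCharsets.UTF_8);
                    System.out.println("heard: " + msg + " from " + packet.getAddress());
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
        listener.setDaemon(true);
        listener.start();

        // Sender: announce this node once per second.
        try (DatagramSocket socket = new DatagramSocket()) {
            byte[] data = ("node@" + InetAddress.getLocalHost().getHostName())
                    .getBytes(StandardCharsets.UTF_8);
            for (int i = 0; i < 3; i++) {
                socket.send(new DatagramPacket(data, data.length, group, PORT));
                Thread.sleep(1000);
            }
        }
    }
}
```

A real app would parse the announcements and connect to newly discovered peers instead of just printing them.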
https://eclipse-ee4j.github.io/mail/#Latest_News
as of July 3rd, 2019, what was once known as the "JavaMail (-API)" has been renamed to "Jakarta Mail" with the release of 1.6.4

well - there wasn't much news - I guess because it's been a long time now since JavaMail was moved from an open Java project to be part of the Eclipse Enterprise for Java (EE4J) project


didn't know that - I was surprised when I looked for the current online doc, which can now be found at: https://eclipse-ee4j.github.io/mail/docs/api/
well - the use case you wrote makes the answer simple: UDP/multicast
First: as the tablets connect wirelessly, there's a chance that the connection gets lost a few times when transitioning from one zone to another - so relying on a stateful connection might not be the right choice in the first place.
Second: as the services seem to be open, security comes into play: you don't want to roll your own but rely on something known, tested, and secure for such environments.
Third: although I guess there are lots of apps that already do what you want - when you go on vacation on a cruise ship with your loved one - do you really need to communicate digitally?
well - this can be simplified: does the AS400 speak TCP/IP? Only if so can you connect to it with Java, as Java only supports TCP/IP and UDP/IP.
If the target does not have an IP stack, or only understands ICMP but neither TCP nor UDP, Java isn't the right language.
So, try to answer your own question: how do you communicate with the AS400 - does it even speak IP? Or do you have to use some other protocol not available to Java?
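A quick way to answer "does it speak TCP/IP at all" is a plain socket connect with a timeout - a sketch, where host and port are placeholders for the AS400's address and whatever service port it exposes:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Checks whether a target completes a TCP handshake on a given port.
public class TcpCheck {
    static boolean speaksTcp(String host, int port, int timeoutMs) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMs);
            return true; // handshake completed - the target has a TCP/IP stack
        } catch (IOException e) {
            return false; // refused, timed out, or unreachable
        }
    }

    public static void main(String[] args) {
        // placeholder host/port - substitute the real AS400 address
        System.out.println(speaksTcp("localhost", 23, 2000));
    }
}
```

Note that `false` on one port only proves that port is closed - you'd try the ports the machine is actually supposed to serve.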
Ok, although in this very case I'd rather not offload any computation to the clients but handle everything server-side - as in my specific "problem" the clients are more like mere input forwarders and result displays instead of computing anything themselves (pretty much what a browser does: accept user input, send a request to the server, and display the response).

nvm - to define and break it down - maybe this is where I need help in the first place =D:
If I were asked to simply define my goal as an abstract problem to solve, I would start like this:
- repeated calculation of a given formula based on the last value and the time passed since the last calculation

So, based on this simplified version, I can see that my main loop would just consist of repeated calls of the calculation function - which takes only two inputs: the last value and the time difference.
A possible start might be: instead of thinking about the formulas or how to communicate with the clients, I should first think about how to perform the calculation in the first place.
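That abstract problem can be sketched in a few lines - the linear-growth formula here is just an invented stand-in for whatever the real formula would be:

```java
import java.util.function.DoubleBinaryOperator;

// Minimal main loop: repeatedly apply a formula that takes only the
// last value and the time passed since the last calculation.
public class TickLoop {
    public static void main(String[] args) throws InterruptedException {
        // formula(lastValue, deltaSeconds) -> newValue; example: +5 units/second
        DoubleBinaryOperator formula = (last, dt) -> last + 5.0 * dt;

        double value = 0.0;
        long lastTick = System.nanoTime();
        for (int i = 0; i < 3; i++) {
            Thread.sleep(100); // tick interval - purely illustrative
            long now = System.nanoTime();
            double dt = (now - lastTick) / 1_000_000_000.0; // seconds since last tick
            lastTick = now;
            value = formula.applyAsDouble(value, dt);
            System.out.printf("value = %.2f%n", value);
        }
    }
}
```

Everything else (formulas, client communication) would then plug into this loop without changing its shape.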
6 days ago
Well, thanks for the additional input - I checked - but other than energy management (which is the first thing I set to max whenever I set up a new system) there is nothing I can set. The driver only allows me to see the current status and change the resolution - but that's it.
The only other difference: the main rig runs Win7 - the 2nd machine Win10. So, could it be an OS difference that, for whatever reason, makes Win10 think it needs 3D acceleration when my Win7 system doesn't even bother with 3D but does it all in 2D (as I expect it to)?
That's what really bothers me - why is it even rendered with 3D acceleration (raising the GPU to its high power state when there's actually no need to) instead of plain 2D drawing? This doesn't seem right.

Oh, and about the format: you used "hh" - which is 01-12 am/pm - but without an am/pm indicator - I changed it to "HH", which is 00-23, as I'm used to (I'm German, but use the English locale to get English abbreviations for day of week and month instead of the German ones it would default to when no locale is set - while keeping the German standard 00-23 instead of the 1-12 am/pm used in many other countries).
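The "hh" vs. "HH" difference in one self-contained snippet - the concrete pattern strings are just examples:

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.util.Locale;

// "hh" is the 12-hour clock (01-12, needs an am/pm marker "a"),
// "HH" is the 24-hour clock (00-23). Locale.ENGLISH gives English
// day/month abbreviations while the pattern keeps the 24-hour format.
public class ClockFormat {
    public static void main(String[] args) {
        LocalDateTime t = LocalDateTime.of(2019, 9, 10, 15, 30, 0);
        DateTimeFormatter twelve =
                DateTimeFormatter.ofPattern("EEE dd MMM hh:mm:ss a", Locale.ENGLISH);
        DateTimeFormatter twentyFour =
                DateTimeFormatter.ofPattern("EEE dd MMM HH:mm:ss", Locale.ENGLISH);
        System.out.println(twelve.format(t));     // 15:30 shows as "03:30:00 PM"
        System.out.println(twentyFour.format(t)); // 15:30 shows as "15:30:00"
    }
}
```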
6 days ago
~stripped-down version~
Well, sadly, the resource usage hasn't improved a bit - still full system usage.
When I run it on my main rig I also only get 1-2 % usage - and my GPU doesn't even use 3D but does it all in 2D, as it sees it as part of the normal desktop window.
So, it seems it's not the Java code itself that's the problem, but something's wrong when it's run on the 2nd machine.
According to sources the A10-7800 is a newer architecture than my FX-8350 - so it should run the same stuff with more efficiency - I really wonder why it runs way better on a system that's overall way older than on newer hardware which should have no problem running it.
I also tried reducing it to 1 FPS (so, remove the milliseconds and call it only once each second) - surprisingly, the system load does not decrease a tiny bit. So, even with this improved code, and also "slowed down" to once per second instead of once per 1/60 second, the system still uses all resources to display that simple clock. Maybe it's something hardware-related. I don't have more systems than these two, but it's so strange that a simple clock updated only once a second needs the whole system's power.

Don't get me wrong - the 2nd system is powerful enough - it can even live-stream 1080p60 at 6,000 kBit/s - a very demanding task - which only uses up about 75% of the overall system power. Can it really be something Java-specific causing this strange issue? I'm out of ideas ...

Matt
1 week ago
Well, I'm aware that for obvious reasons there can't be such a thing as "one size fits all" - what "bothers" me is that there isn't much information to find, or I'm not really able to find it. I also tried to look up some books (even in languages I don't know) - but no luck there either. Sure, I tried to "simplify" my search, not just looking for my specific topic but for multi-user networked applications and games in general. Unfortunately I was only able to find information about specific frameworks or technologies that don't matter to me (try mixing up these words in Google: browser, engine, game, java - you'll be surprised what comes up).
So, I'm not specifically looking for something like "ok, teach me how to implement something like OGame in Java" - well, in the end I am, but to get there I need to learn about game development, multi-user interaction, the split between backend business logic and frontend presentation - and somehow put all that together to get in the direction I want. I don't know if I'm just not able to find resources for such topics, or if my selection of "what's relevant to me" isn't what would actually be helpful.

So, if someone knows resources for maybe smaller topics that help toward the overall goal I'll be thankful - even books (if possible, please keep it to German or English - or translated versions), although I currently don't have much money to spare.

Matt
1 week ago

System:
win 10 pro 1803
cpu A10-7800
16GB ram

using this simple 60fps clock (16 ms is 62.5 fps) completely uses up all of my system resources
the Win10 task manager says:
cpu: 100% (all 4 cores)
gpu: 95% 3D

How can this simple GUI snippet use up 4 cores of a 3.72 GHz CPU at 100%, and almost 100% of its IGP, just by displaying the current time?
Are there any better ways to display a clock?
Note: the colors have to be green and white as it's overlaid and keyed out (color key) for a video.
Before someone comes and tries to smart-arse: even slowing down to a 2 sec (2000L) sleep doesn't solve the issue - the system is still used up 100% - so changing the "speed" of the clock doesn't seem to affect resource usage.
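For comparison, a sketch of a clock that avoids any sleep/repaint busy loop: `javax.swing.Timer` fires on the event dispatch thread once per second and only the label text changes. Font size and window dimensions are arbitrary; the green/white colors follow the color-key requirement above.

```java
import java.awt.*;
import java.time.LocalTime;
import java.time.format.DateTimeFormatter;
import javax.swing.*;

// Low-overhead clock: one timer event and one label repaint per second.
public class SimpleClock {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            DateTimeFormatter fmt = DateTimeFormatter.ofPattern("HH:mm:ss");
            JLabel label = new JLabel(LocalTime.now().format(fmt), SwingConstants.CENTER);
            label.setOpaque(true);
            label.setBackground(Color.GREEN);   // key color for the video overlay
            label.setForeground(Color.WHITE);
            label.setFont(new Font(Font.MONOSPACED, Font.BOLD, 72));

            JFrame frame = new JFrame("Clock");
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.add(label);
            frame.setSize(480, 160);
            frame.setVisible(true);

            // 1000 ms period: no spinning thread, no full-frame redraws.
            new Timer(1000, e -> label.setText(LocalTime.now().format(fmt))).start();
        });
    }
}
```

If even this pegs the CPU/GPU on one machine only, that points at the graphics pipeline on that machine rather than the Java code.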

Matt
1 week ago
Hey there,

it's not needed, but knowing OGame and the likes helps to understand.

Way back in my "early" days (around 10-15 years ago) I played a lot of browser games like OGame, HackTheNet, WorldHackOrg and such ... and always wondered: how do these work on the server-side backend?
One of those, can't remember which, maybe even OGame, once released an old version which was just a few PHP files and a few notes on how to set it up, and that it's important that a few files get called in certain time frames to keep the game up. All it came down to, as far as I understood it back then, were a few formulas with timestamps based on data stored in the database, renewed each time a player did an action, or, for those not actively online, recalculated in those time intervals.
A few years later I was able to implement a very rudimentary "engine" (one would call it nowadays an "early alpha"). It had a few buildings and a few resources, just barely enough to have a basic economy: energy production, resource collection, storage (if you know OGame: pretty much basic mines, storage and power plant). It was far from having things like research and even further away from player interaction.

It basically worked this way:
- the base clock was set to 10 min - so, if some action started, like upgrading a new building, and this action took more than 10 min, the server told the client: don't poll me in the next 10 min - there will be no updates - or, if some action would finish within the next 10 min, the server would reply: poll again in X amount of time - I'll have new updates for you
- as it was implemented in Java, there were threads on the client actively waiting for an action to complete - so, when an action was started, the server replied how long it would take and the client kept actively tracking it regardless of the overall 10 min cycle
- the resources were interpolated - meaning, instead of only displaying the value calculated at the last server action, the client counts actively - this was done by the server sending rates for how much of a resource was gathered in which time (could be negative!) - an addition I might consider today, based on what the players would like: either implement it the way Age of Empires does - when you want to build something the resources get taken away all at once and you need at least that amount to place the order - or like Red Alert: it doesn't matter if you have the resources now, and they don't get taken all at once but over time as the build progresses - maybe even mixing both - I would let the players decide what they like more
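The interpolation described in the last point can be sketched like this - the server sends only a base value, a rate (possibly negative) and a timestamp, and the client extrapolates; the concrete numbers are invented examples:

```java
// Client-side resource interpolation: extrapolate the current amount
// from the last server update instead of polling constantly.
public class ResourceView {
    final double baseAmount;   // value at the last server update
    final double ratePerSec;   // production rate sent by the server (may be < 0)
    final long baseMillis;     // timestamp of the last server update

    ResourceView(double baseAmount, double ratePerSec, long baseMillis) {
        this.baseAmount = baseAmount;
        this.ratePerSec = ratePerSec;
        this.baseMillis = baseMillis;
    }

    double currentAmount(long nowMillis) {
        double elapsed = (nowMillis - baseMillis) / 1000.0;
        return baseAmount + ratePerSec * elapsed;
    }

    public static void main(String[] args) {
        // Example: 100 metal at t=0, +2.5 metal/s; after 60 s the display shows 250.
        ResourceView metal = new ResourceView(100.0, 2.5, 0L);
        System.out.println(metal.currentAmount(60_000L)); // 100 + 2.5*60 = 250.0
    }
}
```

On the next server update the client simply replaces base value, rate and timestamp.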

So, as I had these very basics done, one could start to farm resources, upgrade buildings and manage economy and energy.
Adding buildings, units, research - I guess it's not that hard to implement a basic "tech tree" and come up with formulas that can factor in additional benefits like faster processing, or combining stuff to make new stuff available (like Settlers: you need coal and iron ore to make iron to make tools and weapons).

But what about scaling? Wiki says OGame has 2 million players - is it really that easy to crunch that many datasets, with possible collisions from active players getting calculated on each action they take instead of the regular time interval for offline ones? Also, what about calculation errors - the more datasets you have, the more time it takes to update all calculations. When you start at the first and get to the 10,000s you may already be a few seconds in. For a high-rank player this lack of precision could mean thousands of resources missing compared to what they actually should have. I can see how this would work for a small community - but for massive servers with thousands of players, each of whom may request huge datasets at the same time (wouldn't it end up in timeouts for some of the higher-latency connections)?

never mind - I guess that's something to re-plan when you get big.
Another thing I never got my head around: how does player interaction work in such games?

To stick to the example of OGame: I send a command: attack player X with fleet - which takes time for the flight to player X - then a bit of "action" - and the return. How are such events calculated? Actively? Or by whichever thread comes first from one of the players - or, if both are inactive for more than 10 min, just by the usual server-side cycle? I guess that's a topic I need much help with - it's hard to find such specific information on the net.
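One common pattern for such interactions (not OGame's actual internals - the event names here are invented examples) is a queue of future events ordered by due time, resolved lazily: on each periodic server tick, and additionally whenever an involved player's state is loaded, so nobody has to actively wait for a fleet to arrive.

```java
import java.util.Comparator;
import java.util.PriorityQueue;

// Server-side event queue: future events are resolved lazily whenever
// "now" passes their due time - on a tick or on a player login.
public class EventQueue {
    record GameEvent(long dueMillis, String description) {}

    final PriorityQueue<GameEvent> events =
            new PriorityQueue<>(Comparator.comparingLong(GameEvent::dueMillis));

    void schedule(long dueMillis, String description) {
        events.add(new GameEvent(dueMillis, description));
    }

    // Called on each server tick *and* whenever a player's state is loaded.
    int resolveDue(long nowMillis) {
        int resolved = 0;
        while (!events.isEmpty() && events.peek().dueMillis() <= nowMillis) {
            GameEvent e = events.poll();
            System.out.println("resolving: " + e.description());
            resolved++;
        }
        return resolved;
    }

    public static void main(String[] args) {
        EventQueue q = new EventQueue();
        q.schedule(5_000, "fleet arrives at player X");
        q.schedule(9_000, "battle result + fleet returns");
        System.out.println(q.resolveDue(6_000)); // only the first event is due yet
    }
}
```

The nice property: it doesn't matter whether the attacker, the defender, or the regular cycle triggers the resolution first - the event is computed exactly once, at or after its due time.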

Thanks in advance to anyone,

Matt
1 week ago
I totally agree with you about DRM - we've heard so much bad news. I'm part of the "real fans buy the games" group. Back in the early 2000s I ripped a few games, but since I started working and earning my own money I've bought every game.
I just asked if such a thing would be possible because I'm interested in comparing compression rates:
- does it matter if compression is used before/after encryption?
- does it matter if each asset is compressed on its own, versus putting everything together and compressing the whole archive?
The reason to think about this is when one has 10,000s of assets adding up to 10s of GB but wants to keep them easily loadable. The structure would also be relevant there: which assets are always needed, which are optional - is the loading behaviour predictable at the time of creating the archives?
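The first question has a clear answer that's easy to demonstrate: compression must come before encryption, because well-encrypted data is statistically indistinguishable from random noise and barely compresses. In this sketch random bytes stand in for ciphertext; real cipher output behaves the same way.

```java
import java.util.Arrays;
import java.util.Random;
import java.util.zip.Deflater;

// Compares how well redundant "plaintext" vs. noise-like "ciphertext"
// deflates - illustrating why you compress before encrypting.
public class CompressOrder {
    static int deflatedSize(byte[] input) {
        Deflater deflater = new Deflater(Deflater.BEST_COMPRESSION);
        deflater.setInput(input);
        deflater.finish();
        byte[] out = new byte[input.length * 2];
        int size = 0;
        while (!deflater.finished()) {
            size += deflater.deflate(out); // we only need the total size
        }
        deflater.end();
        return size;
    }

    public static void main(String[] args) {
        byte[] text = new byte[100_000];   // highly redundant "plaintext"
        Arrays.fill(text, (byte) 'A');
        byte[] noise = new byte[100_000];  // stand-in for encrypted data
        new Random(42).nextBytes(noise);

        System.out.println("plaintext  -> " + deflatedSize(text) + " bytes");
        System.out.println("ciphertext -> " + deflatedSize(noise) + " bytes");
    }
}
```

The redundant input shrinks to a tiny fraction; the noise stays at roughly its original size.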
Sure, this is theoretical as I will never be part of such a huge project. For my own current project: if you want to use it online with others you have to sign up for a client certificate which is validated by the server on connection (TLS has all I need - why not use it?). If someone does no-good stuff, the certificate is revoked via CRL/OCSP and the connection is terminated. This way TLS rejects them at the next connection during the handshake's client cert validation.

About "encryption isn't useful": well, aside from reverse-engineering the binary, the GTA community got around it by not posting the key itself, but a hash along with the information of how long the key is and that it's inside the binary. So anyone can just use that hash and a moving window of the key's length to scan the binary -> encryption broken.
Also, using a different key for each user would be infeasible - let alone for the physical disc copies, as one would have to master each disc with a different key - that would cost a lot of money. Even if one re-keyed each batch it would not be worth it.
2 weeks ago
this is more a general question than a language-specific one - so anything is welcome

let's assume this example: I distribute an application (could be anything - physics stuff, a game, whatever) and it ships with some main code and encrypted data files. The protection would be something like this: based on asymmetric crypto, a valid license is checked against a server - if the server confirms the license, it provides the key to decrypt the data files on the fly. I know this isn't secure, but let's assume a perfect world for this example. To save space I didn't put each data record in its own file but rather used archives to achieve a better compression ratio.
For a real-world example: GTA 4 and 5 do this, except the key is already present in the executable and can be obtained without proving a legitimate license to Rockstar.
BTT: GTA uses an archive format where each file is encrypted on its own, with a TOC leading the data, which is also encrypted. So to load a file, the engine loads the archive header, uses it to load the TOC, then finds the location and length of the compressed encrypted data and loads it.
My question: aside from this approach where each entry is encrypted on its own - would it be possible to encrypt an archive as a whole and still maintain random access? Or does it have to be like GTA does it?
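Encrypting the archive as one blob while keeping random access is possible with a stream-cipher construction such as AES in CTR mode: to read from byte offset N you add N/16 to the initial counter and decrypt from the nearest 16-byte block boundary. A minimal sketch of that idea - this is an illustration of CTR seeking, not how GTA's archive format actually works:

```java
import java.security.SecureRandom;
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

// Whole-archive AES/CTR encryption with random access by counter offset.
public class CtrRandomAccess {
    static final int BLOCK = 16;

    // Build the counter block for a given 16-byte block index by adding
    // the index to the IV, treated as a 128-bit big-endian integer.
    static IvParameterSpec counterFor(byte[] iv, long blockIndex) {
        byte[] counter = iv.clone();
        for (int i = counter.length - 1; i >= 0 && blockIndex != 0; i--) {
            blockIndex += counter[i] & 0xFF;
            counter[i] = (byte) blockIndex;
            blockIndex >>>= 8;
        }
        return new IvParameterSpec(counter);
    }

    public static void main(String[] args) throws Exception {
        SecureRandom rnd = new SecureRandom();
        byte[] keyBytes = new byte[16], iv = new byte[16];
        rnd.nextBytes(keyBytes);
        rnd.nextBytes(iv);
        SecretKeySpec key = new SecretKeySpec(keyBytes, "AES");

        byte[] archive = "entry-one....entry-two....entry-three...".getBytes();

        // Encrypt the whole archive as one stream.
        Cipher enc = Cipher.getInstance("AES/CTR/NoPadding");
        enc.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));
        byte[] cipherText = enc.doFinal(archive);

        // Random access: decrypt only from block 1 (byte offset 16) onward.
        Cipher dec = Cipher.getInstance("AES/CTR/NoPadding");
        dec.init(Cipher.DECRYPT_MODE, key, counterFor(iv, 1));
        byte[] tail = dec.doFinal(Arrays.copyOfRange(cipherText, BLOCK, cipherText.length));

        System.out.println(new String(tail)); // archive content from offset 16 on
    }
}
```

A real archive would still keep a (possibly separately encrypted) TOC mapping entry names to offsets, but the payload can be one continuous encrypted region.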

Matt
2 weeks ago
if OP wants to build a global keylogger:

https://kra.lc/blog/2016/02/java-global-system-hook/

I had this code (or better: an earlier version of it) in use for some time for some reason - it did its job well - I don't know how the current version performs.
2 weeks ago
I'm aware that such products exist to serve the prosumer market - but to keep to the mentioned example of live stream encoding:
I'm not sure about that, but I am sure that none of the existing affordable SoCs/SBCs in the range up to 100 bucks can handle a full-HD 1920x1080 stream at 60 fps (we can neglect the audio, as that doesn't need much) - even my decent GPU struggles with this load from time to time (depending on how much changes from one frame to the next). If one clustered a bunch of said systems, they might have the raw power - but there's added overhead for splitting the incoming signal to the next free node and for recombining the transcoded stream in order - each of which has to be handled on its own. So the smallest cluster I could come up with would consist of at least 4 units:
1 for receiving the stream (no matter if hdmi, dvi, dp, vga, s-video, component, scart, etc ...) and splitting it to the work nodes
2 work nodes to split the transcoding load between
1 for recombining the transcoded data from the work nodes and sending it to its destination
I can only see increasing the middle part so far until one hits the limit of one of the two outer nodes handling the splitting and combining. But, given the effort I would have to put into such a project - aside from learning "cool, it can be done this way" - I would just use normal consumer PC components for their raw power. That's why I can't see these small not-so-much-power SoCs/SBCs scaling for such use cases. Sure, if one replaced the tiny nodes with big multi-CPU/-GPU server stuff, I can see the scaling, as the workload for splitting would be way less than the outer nodes are able to handle - so it could be used not just for one but for many streams.

I don't want to say these on-chip boards are useless - in fact, I myself use a PC Engines APU2 for my small backup mail server - and this one job is done pretty well - but it's also a very high workload (somewhere in the upper 50% rather than the lower 50%) for this small 4-core AMD with 1 GB of RAM per core (I don't know if the RAM is shared as one 4 GB between all 4 cores or if it's dedicated - but it's measurably slower when data is transferred from one core to another than when both are processed by the same core). I just can't see why someone would bother to cluster them up for private home use aside from "learning practice".
2 weeks ago