fred rosenberger

lowercase baba
since Oct 02, 2003
St. Louis

Recent posts by fred rosenberger

Does it have to be a single regular expression?

If possible, I'd do it in two passes: one pass to find strings containing "openshift.cluster_id:", and then another that, of those, filters out the ones you don't want.

I HATE complicated regular expressions.  They are tricky to write correctly, hard to test, and next to impossible to decipher when you come back to them six months later (or next week, in my case).  Two simple regexes are much easier to follow than one complicated one.
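The two-pass idea above can be sketched roughly like this. The input lines and the "unwanted" pattern are hypothetical stand-ins, since the thread doesn't show the actual data:

```java
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

public class TwoPassFilter {
    public static void main(String[] args) {
        // Hypothetical input; only the field name comes from the thread.
        List<String> lines = List.of(
                "openshift.cluster_id: prod-east-1",
                "openshift.cluster_id: test-west-2",
                "some.other.field: value");

        // Pass 1: keep only lines that mention the field we care about.
        Pattern hasField = Pattern.compile("openshift\\.cluster_id:");
        // Pass 2: of those, drop the ones we don't want (here: test clusters).
        Pattern unwanted = Pattern.compile("test-");

        List<String> result = lines.stream()
                .filter(s -> hasField.matcher(s).find())
                .filter(s -> !unwanted.matcher(s).find())
                .collect(Collectors.toList());

        System.out.println(result); // [openshift.cluster_id: prod-east-1]
    }
}
```

Each pattern stays trivial to read and test on its own, which is the whole point of splitting the work.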
5 months ago
I am "near here", but I'm sure my "here" is very different from your "here".
1 year ago

Tim Holloway wrote:I do remain firm in my belief that Fahrenheit is a much better scale for measuring human comfort than Celsius



Why?  I mean, isn't this really just a "familiarity" issue?  If all you ever used/knew was Celsius, then the Fahrenheit scale would seem pretty strange to you.
1 year ago
Louis Prima was fabulous...
1 year ago

James Sabre wrote:Since you are reading the whole file into memory you can safely re-open the file and write to it i.e.



I would NEVER write the changed lines back out to the same file.  The odds are too great that you'll screw something up and your original file will be lost forever.  I'd always write the output to a new temp file, inspect it to be sure it is correct, and then rename the original to a .old or some such.
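A minimal sketch of that temp-file-then-rename workflow, with hypothetical file names (the transformation here is just an uppercase pass for illustration):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.List;

public class SafeRewrite {
    public static void main(String[] args) throws IOException {
        // Hypothetical input file for illustration.
        Path original = Path.of("data.txt");
        Files.write(original, List.of("line one", "line two"));

        // Read everything into memory and change what we need to.
        List<String> changed = Files.readAllLines(original).stream()
                .map(String::toUpperCase)
                .toList();

        // Write to a temp file first -- never over the original.
        Path temp = Path.of("data.txt.tmp");
        Files.write(temp, changed);

        // Only after the new file is verified: keep the original as .old,
        // then move the temp file into place.
        Files.move(original, Path.of("data.txt.old"),
                StandardCopyOption.REPLACE_EXISTING);
        Files.move(temp, original);

        System.out.println(Files.readAllLines(original));
    }
}
```

If anything goes wrong mid-way, the original content still exists somewhere on disk, which is the safety property the post is arguing for.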
1 year ago

Campbell Ritchie wrote:

fred rosenberger wrote:. . . "find the item at position 1,248,423", then yes, an array list is very fast.

Yes, that operation runs in constant time. But it isn't what I would call searching.


Which is why I said "it depends on your definition of searching".  In my mind, searching is more or less "find the thing that...":

Find the thing that is called fred.
Find the thing that has the greatest value in the amount field.
Find the thing that is at position 1,287,646.

Remember, we're not starting with any data structure, we're analyzing what we want to do.  If I know that I will frequently need to "find an element at position x", then I will use an array because searching/finding a positional element is constant time.
1 year ago

Mahadi Hasan wrote:[T]he ArrayList is good for searching with optimized time



I think it depends on what you mean by "searching".  If you mean "find the item at position 1,248,423", then yes, an array list is very fast.  However, if you mean "find the element with 'fred rosenberger' as the user name", then I'm pretty sure it is terrible.
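The distinction the two replies above are drawing can be shown in a few lines. `get(i)` jumps straight to a slot in the backing array, while a lookup by value has to walk the list:

```java
import java.util.ArrayList;
import java.util.List;

public class ListLookup {
    public static void main(String[] args) {
        List<String> users = new ArrayList<>();
        for (int i = 0; i < 1_000_000; i++) {
            users.add("user" + i);
        }

        // Positional lookup: get(i) is O(1) -- constant time,
        // no matter how big the list is.
        String atPosition = users.get(999_999);

        // Lookup by value: indexOf compares elements from the front
        // until it finds a match -- O(n) on average.
        int where = users.indexOf("user999999");

        System.out.println(atPosition + " found at " + where);
    }
}
```

Same list, same element, but the cost of the two "find" operations grows very differently with the size of the data.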
1 year ago
Caveat :  I know nothing about FB or how they have coded their application.

I'm not sure I understand your first few questions.  "For user registration, which data structure would be best?"  What part of user registration are you talking about?  How do you think you would use a linked list or a hashmap?  Are you talking about having a huge linked list of all users?  Or a huge hashmap of them?

Generally, you shouldn't start with an implementation or data structure in mind. You should start off thinking about what you want to do, and THEN look at what might work best.

An ArrayList would be terrible for searching the amount of data something like Facebook has.  I'd bet they have terabytes of data. An array list would be so slow, you'd (almost) never get results.  I'd think you'd need some kind of database that is specialized for large amounts of unstructured data.
1 year ago
What have you tried?  What did it do differently from what you expected?

Frank Jacobsen wrote:
Faster = The fastest way this can be done.


If you really want "the fastest way this can be done", buy a better CPU. Then wait a few months, and there will be even BETTER hardware, so you can then use that to get more speed...etc.  Make sure you kill all other processes on whatever machine you are running on, so nothing else steals CPU time. If you are bound by disk reads, get a better disk drive.  And so on...

What we're trying to get at is that this is not a spec.  Something can always be done to make it faster. The question becomes "Is it worth it to make it faster than it is now?" and/or "How fast is fast enough?".  If it can process each call in 10 microseconds, is it worth spending $1,000,000 to get it down to 9 microseconds?  That would be faster.  Then do you spend another $2 million to get it down to 8?

Best practices say you define what the speed needs to be before you start optimizing. Then you look at where your bottlenecks are, what it would take to improve each, and at what cost.  Then you decide which ones are worth the cost.  

"The fastest way it can be done" is an unobtainable goal, as there is (for all intents and purposes) no end point.
2 years ago
The best solution for doing things "fastest" is to buy better hardware - CPUs, memory, etc.
2 years ago
or look at the table where it says:

100 99.99997%
2 years ago

Sami radwan wrote:all my objects should be declared final?


It's confusing. I had a hard time with this concept as a beginner.  When you have a line like:

Person person = new Person();

there are two things created.  "new Person()" creates a Person object somewhere off in memory.  "Person person" creates a reference to an object of type Person.

I think of it as the reference being a card in a rolodex, where you write down the address of a house.  And the Person object is the house itself.  If you make your "person" reference final, you are writing the address in ink - it cannot be changed later.  That's not to say you can't go to the house the address points to and paint it a new color...
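The rolodex/house analogy translates directly into code. A sketch (the list and names are just illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class FinalReference {
    public static void main(String[] args) {
        // The reference is written "in ink": it can never be
        // re-pointed at a different object.
        final List<String> names = new ArrayList<>();

        // names = new ArrayList<>();  // won't compile: cannot assign to final

        // But the object the reference points at is still mutable --
        // we can "repaint the house" all we like.
        names.add("fred");
        names.add("rosenberger");

        System.out.println(names); // [fred, rosenberger]
    }
}
```

So `final` protects the card in the rolodex, not the house it points to; for an immutable object you need an immutable class, not just a final reference.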
2 years ago
I second Campbell's question.  Are you assuming a CSV file will be faster for some reason?  The first rule of optimization is NEVER ASSUME one way will be better.

Databases are a pretty mature technology.  They have been honed and tuned for decades to give excellent performance, doing things you are often not even aware of.  I would be surprised if parsing a CSV file were faster than a DB query.

Also, if you are only exporting the data once, does it matter how fast it is?  Had you just started an export three days ago when you created this thread, it might be done by now.  Does the speed of the EXPORT really matter, when compared to the searching time?

and as to your "is it possible..." question, the answer is always "yes, but....".  Yes, you can, given enough time, effort, and money.  A better question is "Is it worth it...", and honestly it seems like "No" is the answer.
2 years ago

Alireza Saidi wrote:Hello dear friend! How can I solve that? Can you help me please?


People here love to help you, but they won't do it for you.  You need to tell us exactly what the problem is.  Does this compile? Does it run?  Does it give the wrong output?

Tell us the details.  Ask questions.  Engage with people who post, and you will get more help than you could ever ask for.
2 years ago