
Using HTTP POST like GET

 
Bartender
Posts: 669
15
TypeScript Fedora
Hello all,

I would like to know your thoughts on using HTTP POST as a GET request.
The reason would be to send the data in the body, because there is too much of it to fit anywhere else.
The data would not change anything server side; there is just so much of it that it affects what is returned.

Personally I think this is a code smell.
Is it really that smelly?  
How would you approach a situation where you need to send large amounts of data in a GET request?

I would assume there is some design flaw, but I'm not sure how else you would accomplish that.

Thanks,

AA
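For concreteness, the pattern being asked about looks something like this. This is only a sketch with made-up names: the /search endpoint, the example.com host, and the JSON criteria are all hypothetical; the request is built with the JDK's java.net.http API.

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class PostAsQuery {
    // Hypothetical /search endpoint: the criteria are too large for a query
    // string, so they travel in the POST body even though the request only
    // reads data and changes nothing on the server.
    public static HttpRequest buildSearchRequest(String jsonCriteria) {
        return HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/api/search"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(jsonCriteria))
                .build();
    }

    public static void main(String[] args) {
        HttpRequest request = buildSearchRequest("{\"answers\":[\"yes\",\"no\"]}");
        // Prints: POST https://example.com/api/search
        System.out.println(request.method() + " " + request.uri());
    }
}
```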
 
Marshal
Posts: 4796
601
VSCode Eclipse IDE TypeScript Redhat MicroProfile Quarkus Java Linux
When HTTP is used as a transport for RPC calls, it is normal to use POST to invoke a procedure on the remote/server side.

Is the API an RPC type (action focused) or a RESTful type (resource focused)?
 
Bartender
Posts: 15741
368
It depends on what the data is, and what the server needs it for.

There are various situations in which a GET isn't appropriate, even if the request doesn't change the server's application state. For instance, if the payload contains sensitive information, you want it protected by TLS. That is not possible with a GET request.
 
Saloon Keeper
Posts: 28667
211
Android Eclipse IDE Tomcat Server Redhat Java Linux

Stephan van Hulst wrote:... if the payload contains sensitive information, you want it protected by TLS. That is not possible with a GET request.



I'm not so sure about that. I'm fairly certain that an https GET is encrypted from the, er "get"-go. Actually targeting the server is a bit harder to protect, since you cannot route to a (possibly-resolved) IP address without the destination IP in the TCP packet, but the first thing that happens when you send a request to open a listening server port is that encryption is negotiated even before the URL itself (and its GET info) is transmitted.

However, GET was never intended to send large amounts of data, and originally GETs were limited to a fairly small length, often 1024 characters or less. POST was specifically designed for the purpose of larger payloads. POST is theoretically unlimited in payload size, although servers typically have a cutoff point in order to discourage DOS attacks and the like. Systems that receive large images, ZIP files, videos and the like may allow payload sizes in the megabyte range.

The original intent of GET was to request (GET) data from the server, perhaps aided by a few identifying/qualifying parameters. The original intent of POST was to literally post data to the server. However, we have re-purposed these verbs to allow for things like AJAX, REST, and the like, and in some cases adopted conventions to allow consistent verb usage across heterogeneous transfers within a given framework. We could have added new HTTP verbs for things like long data-in/long data-out and such, but instead we simply futz around.

So long and short of it, yes, I'd POST.
 
Stephan van Hulst
Bartender
Posts: 15741
368
Sorry, what I meant to say is that it is not secure to send sensitive data with a GET request, regardless of whether the request itself is encrypted.

GET requests don't have a body that is kept confidential. You can only put data in the URL and in request headers, and those have a nasty tendency to end up in browser caches and server access logs.
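To make that concrete, here's a minimal sketch of where the data ends up in each case. The endpoint and the "ssn" field are made up for illustration; the point is that the GET variant bakes the secret into the URI (which caches and access logs record), while the POST variant keeps it in the body.

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpRequest;
import java.nio.charset.StandardCharsets;

public class SensitiveParams {
    // Hypothetical endpoint; "ssn" stands in for any sensitive field.
    static final String BASE = "https://example.com/api/lookup";

    public static HttpRequest asGet(String ssn) {
        String query = "ssn=" + URLEncoder.encode(ssn, StandardCharsets.UTF_8);
        // The secret becomes part of the URI, which browser histories,
        // proxies, and server access logs routinely record even when the
        // connection itself uses TLS.
        return HttpRequest.newBuilder(URI.create(BASE + "?" + query)).GET().build();
    }

    public static HttpRequest asPost(String ssn) {
        // The secret travels in the request body; the logged URI stays clean.
        return HttpRequest.newBuilder(URI.create(BASE))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(
                        "ssn=" + URLEncoder.encode(ssn, StandardCharsets.UTF_8)))
                .build();
    }
}
```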
 
Bartender
Posts: 2449
13
With regard to "using POST as GET": POST is non-idempotent while GET is idempotent.
That means calling GET multiple times returns the same result.
But calling POST multiple times will end up creating multiple resources.
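A toy illustration of that distinction, using a hypothetical in-memory order store (strictly speaking, a POST only *may* create a new resource, but the creation case is the classic one):

```java
import java.util.ArrayList;
import java.util.List;

public class Idempotency {
    private final List<String> orders = new ArrayList<>();

    // GET-style: safe to repeat; server state is unchanged and the
    // answer stays the same no matter how often you ask.
    public int getOrderCount() {
        return orders.size();
    }

    // POST-style: each call creates a new resource, so an accidental
    // retry (say, a reloaded page) produces a duplicate order.
    public int createOrder(String order) {
        orders.add(order);
        return orders.size() - 1; // id of the newly created resource
    }
}
```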
 
Stephan van Hulst
Bartender
Posts: 15741
368

Himai Minh wrote:But calling POST multiple times will end up creating multiple resources.

Correction: calling POST multiple times *may* end up creating multiple resources.

 
Tim Holloway
Saloon Keeper
Posts: 28667
211
Android Eclipse IDE Tomcat Server Redhat Java Linux

Stephan van Hulst wrote:Sorry, what I meant to say is that it is not secure to send sensitive data with a GET request, regardless of whether the request itself is encrypted.

GET requests don't have a body that is kept confidential. You can only put data in the URL and in request headers, and those have a nasty tendency to end up in browser caches and server access logs.



I'll accept that. It's less of a security risk, since the plain-text URLs are only visible at the endpoints, and not in transmission, but the more places plain text is hanging around, the more opportunities for an invader to break in and slurp them up. Same reason why Java prefers to deal with passwords as character arrays (which can be blanked immediately after use) instead of Strings (which have to wait for garbage collection).
 
Tim Holloway
Saloon Keeper
Posts: 28667
211
Android Eclipse IDE Tomcat Server Redhat Java Linux

Himai Minh wrote:With regard to "using POST as GET": POST is non-idempotent while GET is idempotent.
That means calling GET multiple times returns the same result.
But calling POST multiple times will end up creating multiple resources.


In theory. As I said, those verbs are often used in ways that defy the original intents.

GET wouldn't be idempotent if it were used to obtain a display that updates in real time, for example.
 
Bartender
Posts: 322
12
IntelliJ IDE Linux
Someone will probably disagree, but I can't stand HTTP and I wish it would go away entirely. I see it as legacy trash that we're stuck using, just like JavaScript. lol

So whenever a situation like the one presented here appears, I just do what I need to get the job done in the most secure and reliable way, not worrying about how it's "supposed" to work at all...
 
Stephan van Hulst
Bartender
Posts: 15741
368
Yes, I'll disagree.

HTTP 1.1 is a fairly well engineered protocol that does a lot of things right. I'm not quite sure why you think it's trash, but I'll take this opportunity to point out that getting the job done in the most secure and reliable way involves using HTTP the way it's supposed to.
 
Lou Hamers
Bartender
Posts: 322
12
IntelliJ IDE Linux
I'm just exhausted by all the old web tech that sticks around forever and will probably never be improved on, not just this one. For one thing, HTTP fell behind and created a need for WebSockets, I think because the request/response model was just not going to work for a lot of modern use cases.

I've never been a fan of these HTTP method things either, for reasons I can't pin down concisely. They seem arbitrary to me, are often misused, and I feel like they conflate concerns that should be at different "layers" (as in OSI model layers). Look at this ancient-looking thing, "CONNECT":
https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods/CONNECT

Last I learned, I'm pretty sure TCP and TLS are below layer 7 in the OSI model where I think most HTTP stuff belongs. Then there's "TRACE" which is for debugging, I have no idea who even uses that thing or why it needs to be a request method. And of course you have weird cases like what the original post here raised.

I don't even see the benefit of having this stuff anymore - just let us define and design our APIs ourselves at the application level. It's like the "network guys" tried to get involved with "application developer" business or something, and now we're supposed to follow this dogma by the book or else we're doing it wrong? I'd prefer a much simpler standard which doesn't come with so many rules and requirements, that leaves more design up to the application developer.

I'm open to the possibility that I'm just wrong; this is outside the realm of Java and not a topic I'd claim to be an expert on. I can accept that it was well designed when it was created, but that was decades ago.

I could go on but I don't want to derail the thread too much if the OP doesn't feel their question has been answered yet. I think I've had similar situations before where switching from GET to POST worked fine, so I just did that and moved on.
 
Tim Holloway
Saloon Keeper
Posts: 28667
211
Android Eclipse IDE Tomcat Server Redhat Java Linux
There are 2 ways to handle communications between a client and server. One is the continuous-connection method used by traditional time-sharing systems such as IBM's TSO.

The other is request/response. Well, actually there's a third, which is permanent connection, but I don't think you want to surf the Internet with that.

The problem with continuous connection is that it locks down a pair of TCP ports for as long as the connection exists. We have a lot more available ports these days, but I don't think that you could run Amazon.com with only 65535 connections per server machine (less overhead such as ports for DNS, DHCP, and so forth).

The request/reply model is a lighter-weight protocol where you only need to lock down ports between time of request and end of reply. A similar model was used on, for example, IBM's CICS, where again, resources were not continuously available, but only on demand. CICS predated TCP and used hard-wired or dial-up sessions, but the general architecture was more similar to HTTP than to TSO.

Because ports are not locked down for long periods, the number of possible clients is much larger when using HTTP.

HTTP also carries one very useful attribute that was common back in the early Internet days. It's text-based. If you need to test an HTTP server or SMTP mailserver, you can use telnet to brute-force conversations and ensure that the infrastructure is all in place and operational. Perhaps more importantly, with the aid of simple code page translation an EBCDIC-based IBM (non-PC!) computer can carry on conversations with machines running under ASCII.

Not every application needs full two-way synchronous communication. And if I understand the Web Services protocol correctly, it's more overhead than running a single continuous two-way channel.

Having a separate binary API for everything we do would be a nightmare. Remember CORBA? Who uses Java's own RMI? Well I have, but not over the Internet.

As a general rule, every Internet protocol has one or more standard ports. To be secure on the open Internet, we firewall as much as possible and often perform deep packet inspection. So having a different port for each API (à la CORBA) and programming a site's allowable rules for each API would be very expensive.

We passed the need/desire for ultra-high efficiency in computing somewhere between the time when a mainframe computer ran $10 million or more and the time when people's wrist-watches exceeded said mainframes in processing speed and even RAM. On the one hand I hate it, because it has made "Git 'er Dun!" the rule of the day, and if it crashes, you just tell people to "turn it off and back on again". On the other hand, the complexity of modern-day systems has reached a level where a custom low-level API for every service is not considered tenable.

I should note that while ostensibly primitive, HTTP support has received a number of invisible performance enhancements over the years. The overhead of opening/closing a port for each HTTP request was reduced by transparent "keep-alive" functionality. Client and server will often negotiate overhead-reducing functions such as transparent over-the-wire data compression.

Finally, it should be noted that the primary protocols for the Internet are defined in the RFCs, and one of the things an RFC tries to do is define a protocol that is powerful but simple. It's the Unix philosophy of taking many small programs and linking them together in place of one enormous TRON-style Master Control Program: you never know what direction may prove to be the Next Big Thing.
 
Tim Holloway
Saloon Keeper
Posts: 28667
211
Android Eclipse IDE Tomcat Server Redhat Java Linux
Afterthought.

The reason we call it the "Internet" is that unlike earlier times, when computers were centralized and networks only connected terminals within their local net, the Internet connects many networks together. Do a "traceroute" even to another local server and it will likely list a half-dozen or more intermediate nodes. Just think of what would happen if backbone nodes had to lock down their ports for continuous-connection services.

Oh yes, and I forgot. PCs are mostly little-endian; IBM mainframes, Motorola CPUs and various other systems are big-endian. A text-based protocol doesn't have to deal with "endian" problems. Binary protocols, on the other hand…
 
Lou Hamers
Bartender
Posts: 322
12
IntelliJ IDE Linux
Oh, Tim, I definitely recognize the need for the request/response model for scalability. Based on what I've read, newer HTTP specs and WebSockets support a form of continuous connection. So now it seems like we have another competing-technologies issue, I guess because HTTP was slow to update. But I could be overlooking something there.

What I was trying to focus on is the protocol. The HTTP request method/verb stuff like POST/GET/PUT/DELETE that Al was asking about grinds my gears or something. POST sounds dated; maybe it should be SUBMIT. The way GET works, with the length limitation that Al raised here in the OP, is pretty annoying. POST and PUT have some weird overlap, and I feel like both could probably be named better, or even combined. CONNECT feels like it doesn't even belong there. (Why no DISCONNECT, anyway?) Why is there no SEARCH or QUERY?

Then they try to specify whether a call should be idempotent, for example; well, there's no guarantee anything is going to be idempotent, since it's up to the end developer. https://developer.mozilla.org/en-US/docs/Glossary/Idempotent

Note that the idempotence of a method is not guaranteed by the server and some applications may incorrectly break the idempotence constraint.

Ah okay, well that's... nice. Apparently this is overlooked by application developers on a frequent enough basis that they saw a need for this statement. This is very tangential to the "Do not reload the page!" messages we used to see. Since we're talking about "smells" (hah), that one stinks pretty bad; what's to blame for that? When we have to rely on the user to do the correct thing every time, that sounds like one of the worst design failures I can think of!

So I don't think I can disagree with anything you typed. But I don't know if the protocol updates do enough. Nobody wants to break old stuff so we can never really improve or redesign anything touching the Internet. I find that extremely frustrating. In 2100 or 2200 when we have space colonies and flying cars, are we still going to be using HTTP and JavaScript? I'm almost glad I won't have to see what a mess that'll be.

Or maybe a Carrington event will happen and we'll get rid of it that way...  
 
Lou Hamers
Bartender
Posts: 322
12
IntelliJ IDE Linux
Oh, Tim I have one afterthought I want to add too. The depth of your historical technical knowledge never fails to impress!!!
 
Tim Holloway
Saloon Keeper
Posts: 28667
211
Android Eclipse IDE Tomcat Server Redhat Java Linux
I get my historical knowledge from 2 sources. First, from antiquated and/or historical computer texts in the local library back before I ever even managed to see a computer or terminal. Secondly from having been an active participant in the field for a depressingly long time. I counteract that by my increasing senility and being too lazy to look up details.

The problem with HTTP in particular is that the prime driver of an HTTP conversation is a URL request with headers and an optional body. The very first (text!) line is the request statement itself, which consists of a command, a resource path, and a protocol version identifier, viz:

GET /index.html HTTP/1.1

The choke point, from your point of view, I think, is that there is only a very limited set of command verbs defined by the protocol. The idea is that they provide basic routing (say, to the doGet()/doPost() HttpServlet methods) and it's up to you to refine the process as you see fit.

In actuality, a lot of servers filter requests so that maybe only GET and POST are accepted, for security reasons. Why no "SUBMIT"? Because both GET and POST submit form requests; they just differ in where the name/value arguments are located.

Some verbs, such as DELETE, are intended only for things like WebDAV. And incidentally, CONNECT is used to set up an SSL channel between client and server. I'm thinking that that's what allows certain websites to provide interactive "operator consoles/SSH sessions" on web pages, but I haven't checked in detail. Presumably there is no DISCONNECT because disconnection comes when the client logs out of the CONNECT.

HTTP is versioned, so if anyone comes up with a new-and-improved version, it'll pop up with a new version number and enhanced super-powers. And, of course, if someone can devise a super-protocol that works in a completely different way, well, the Internet tends to adopt what works.
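That verb-routing idea can be sketched with the JDK's built-in com.sun.net.httpserver, a toy stand-in for a real servlet container. The /search path and the reply strings are made up; the point is that the protocol only hands you the verb, and the dispatch (GET vs. POST vs. everything else) is up to your code, just like HttpServlet's doGet()/doPost().

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class VerbRouting {
    public static HttpServer start() throws IOException {
        // Port 0 = let the OS pick a free port.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/search", exchange -> {
            String method = exchange.getRequestMethod();
            String reply;
            if (method.equals("GET")) {
                // GET: the arguments arrive in the query string.
                reply = "doGet: " + exchange.getRequestURI().getQuery();
            } else if (method.equals("POST")) {
                // POST: the arguments arrive in the request body.
                reply = "doPost: " + new String(
                        exchange.getRequestBody().readAllBytes(), StandardCharsets.UTF_8);
            } else {
                // Anything else is refused, as many hardened servers do.
                exchange.sendResponseHeaders(405, -1);
                exchange.close();
                return;
            }
            byte[] bytes = reply.getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, bytes.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(bytes);
            }
        });
        server.start();
        return server;
    }
}
```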
 
Rancher
Posts: 4804
7
Mac OS X VI Editor Linux

Lou Hamers wrote:Someone will probably disagree, but I can't stand HTTP and I wish it would go away entirely. I see it as legacy trash that we're stuck using, just like JavaScript.


I agree. There have been efforts to replace the HTTP 1.x crud with something more modern; see HTTP 3 standardization. What I find worse than just basic HTTP 1.1 is folks implementing nearly the whole OSI stack on top of HTTP 1.x.

HTTP 3 uses UDP and is encrypted from the start. In theory, it's well supported today.

JavaScript makes me cringe. I can't stand it, never have, never will. TypeScript and Dart were attempts, but all of this is living in 1995 and needs to move forward 30 years. All IMHO, of course.

Back to the OP: putting lots of cruft in a URL to make GET do the work of POST is a design smell.

And wow, idempotent network responses, another great idea from the 1980s.....
 
Al Hobbs
Bartender
Posts: 669
15
TypeScript Fedora
Thanks for all the replies.  
As for my situation: basically, parts of the UI state are sent in the POST body.
For example, answers to questions are sent because they could affect the response.
The user's answers could also be sensitive.

It seems like it isn't so bad after all to use a POST request idempotently.
 