Bill Crim

Rancher
since Aug 23, 2018
Bill likes: Mac, Mac OS X, Safari
Advanced C#, Java newb.
Issaquah WA
Cows and Likes
Cows: 8 received (0 in the last 30 days), 0 given
Likes: 17 received (0 in the last 30 days), 6 given (0 in the last 30 days)

Recent posts by Bill Crim

Parts of the .NET framework code still use the non-generic versions. This is only for legacy reasons, or special use cases. Lots of code uses the older collections as a base class, then adds behavior on top of it.

You should always use the generic versions. They are better, and they also benefit from the newer extension methods that target the newer interfaces: IEnumerable<T> (the main hook into LINQ) vs IEnumerable (where you have to call .Cast<T>() to enable LINQ). Don't use the older collections unless a specific API requires them for legacy reasons.
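A quick made-up example of the difference (the class and variable names here are just for illustration):

using System;
using System.Collections;          // non-generic ArrayList
using System.Collections.Generic;  // generic List<T>
using System.Linq;

class CollectionsDemo
{
    static void Main()
    {
        // Generic: strongly typed, LINQ works directly through IEnumerable<T>
        List<int> numbers = new List<int> { 1, 2, 3 };
        int sumGeneric = numbers.Sum();

        // Non-generic: items come out as object, so you need Cast<T>() before LINQ
        ArrayList legacy = new ArrayList { 1, 2, 3 };
        int sumLegacy = legacy.Cast<int>().Sum();

        Console.WriteLine(sumGeneric + " " + sumLegacy);
    }
}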
2 years ago
Web apps have a concept of "Publish". On a web project, you right-click and say "Publish", and it then gives you several methods of publishing the web app. The WAR equivalent is to just zip up the deploy folder: it puts the content and the binaries in the right spots relative to the deploy folder. One of the goals in .NET web publishing is to have an XCOPY deploy (where you just copy all the files into a folder without running installers). If you zip that folder into an archive, it is equivalent to a WAR. If you use a build agent, the deploy artifacts usually come out as a zip.

The "Publish" menu option will also let you push the deployment to any number of targets (FTP, IIS, Azure, etc). If you have a web project and say "Publish", it will also build any dependent projects; it is equivalent to "Build and Publish".
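If you are on the .NET Core CLI instead of the Visual Studio menu (an assumption; the post above is about the Visual Studio route), the rough equivalent is:

dotnet publish -c Release -o ./publish

Zip the ./publish folder and you have the same kind of self-contained artifact a build agent would hand you.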

2 years ago

Monica Shiralkar wrote:C# does not have concept of checkered exception


I laughed at "checkered exception". I have often been frustrated with checked exceptions in Java, and I have to say your description is better.
2 years ago

Monica Shiralkar wrote:

When you use var, you're telling the compiler: "Please determine what the type of this variable is supposed to be, I'm good with whatever you say".



But what good does it do compared to directly telling the compiler that use this variable for string or use this variable for int.

Why tell the compiler something it already knows? Also, when using an anonymous or dynamic type, the programmer doesn't know the type name up front.
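A made-up snippet to illustrate: with an anonymous type there is no type name you could write on the left-hand side even if you wanted to.

using System;

class VarDemo
{
    static void Main()
    {
        // The compiler knows the exact type here; you just have no name to spell out.
        var point = new { X = 3, Y = 4 };
        Console.WriteLine(point.X + point.Y);   // still strongly typed: X and Y are ints
    }
}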
2 years ago

Monica Shiralkar wrote:I am trying to understand that either a language needs var which can be used to assign anything or either it needs String, Int, Char etc for the specific types. But why both?



Specifying the data type of a local variable is redundant when the variable is assigned where it is declared.
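For example (a hypothetical declaration), the explicit form just repeats what is already on the right-hand side:

using System.Collections.Generic;

class RedundancyDemo
{
    static void Main()
    {
        // Saying the type twice adds nothing the compiler doesn't already know...
        Dictionary<string, List<int>> scoresExplicit = new Dictionary<string, List<int>>();

        // ...so var removes the repetition without losing any type safety.
        var scores = new Dictionary<string, List<int>>();
        scores["bill"] = new List<int> { 8, 17 };
    }
}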
2 years ago
"var" is a strongly typed variable, it just assumes the type to be the output of the right-side of the statement.
The compiler isn't confused about the data type, so specifying it explicitly is there for the humans; the compiler is the one actually compiling this code.
That's fine with known types... but there is a reason "var" was introduced with LINQ.
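A hypothetical query, just to show what that means (the explicit type next to the var form):

using System.Collections.Generic;
using System.Linq;

class LinqVarDemo
{
    static void Main()
    {
        var words = new List<string> { "cow", "cat", "horse", "hen" };

        // Spelled out, the result type of even a simple GroupBy is a mouthful...
        IEnumerable<IGrouping<char, string>> groupedExplicit = words.GroupBy(w => w[0]);

        // ...which is exactly the kind of declaration var was added for.
        var grouped = words.GroupBy(w => w[0]);
    }
}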

With var it looks a bit cleaner. When you get a more complex transform, the resulting variable type can get pretty hairy.

When you do "new { <fields> }" you get an anonymous type. It is a read-only class that is automatically named, and it is still very strongly typed. This helps when you are pulling data out of LINQ, but also in an MVC or WebApi project when you are trying to return JSON to the client.
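A sketch of the MVC case, assuming classic ASP.NET MVC (the controller name and fields are made up):

using System.Web.Mvc;

public class ProfileController : Controller
{
    public ActionResult Stats()
    {
        // The anonymous type is serialized straight to JSON for the client;
        // no throwaway DTO class is needed for a one-off response shape.
        return Json(new { Cows = 8, Likes = 17 }, JsonRequestBehavior.AllowGet);
    }
}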

As a beginner, you should avoid "dynamic". Even experienced people can use it wrong if they don't take the time to understand it. It allows for runtime binding like a scripting language. So when you compile something like this...
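A minimal sketch of that kind of code ("Greeter" is a made-up class name here; PrintMessage is the method mentioned below):

using System;

class Greeter
{
    public void PrintMessage(string text) => Console.WriteLine(text);
}

class DynamicDemo
{
    static void Main()
    {
        dynamic thing = new Greeter();
        thing.PrintMessage("hello");   // resolved at runtime, not at compile time

        thing = 42;
        thing.PrintMessage("boom");    // compiles fine, throws RuntimeBinderException here
    }
}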

The call to "PrintMessage" will compile fine, but at runtime it will blow up if the underlying method doesn't exist on the object. If the method does exist, it will execute it regardless of the real type of the object. It also has hooks that get called if the method is missing, which makes it a bit like Ruby in that sense. JSON-style object-store databases make use of it for dynamic projections as well... Like I said, it's expert-level stuff.
2 years ago

Platform   .NET Framework                          Mono (Framework clone)   .NET Core
Windows    Yes (some versions are OS components)   Yes                      Yes
Mac        No                                      Yes                      Yes
Linux      No                                      Yes                      Yes



C# is platform independent. Microsoft makes the .NET Framework on Windows; Mono is the .NET Framework for Mac, Linux, and mobile. If you use .NET Core (the preferred option moving forward), it is cross-platform: the Windows-centric concepts were removed and moved into their own packages. The .NET Framework is a shared component installed at the machine level. .NET Core does not use shared libraries, so you deploy exactly the runtime you compiled against; it is driven entirely by the package management system to bring in runtime components.


In Java, each class is compiled to a separate file, and those are then gathered into a JAR. In .NET, a project is compiled into a DLL. You can then bundle one or more DLLs into a NuGet package (NuGet is the package manager for .NET), which also manages dependencies. So a Maven package is analogous to a NuGet package, and a project compiling to a DLL is equivalent to a JAR for a single library. But a JAR is a more flexible unit of deployment, since you can include any arbitrary Java code in it.

In the Windows space, IIS is the web server. IIS, when hosting the .NET Framework, is set up as an AppPool (the executable) and an AppDomain (the isolation container).
If you are running .NET Core, on any platform, it is just a binary module: you can plug it into IIS or Apache like you would any other module.



Something else to keep in mind: in C# they have Properties as the way of implementing getters and setters.

If you have a simple getter/setter, an auto-property covers it, and it also supports access restrictions (such as a private setter).

That automatically creates a private backing variable. The "Property" construct is an outgrowth of the COM legacy: COM understood that functions called "get_Name()" and "set_Name()" were related. So properties are just functions under the hood, and you can write custom backing code if you want.
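A rough sketch of the flavors described above (the class and member names are made up):

class Person
{
    // Auto-property: the compiler generates the hidden backing field.
    public string Name { get; set; }

    // Auto-property with an access restriction: readable anywhere, settable only inside the class.
    public int PostCount { get; private set; }

    // Property with a custom backing field and custom logic,
    // still just get_/set_ functions under the hood.
    private string _email;
    public string Email
    {
        get { return _email; }
        set { _email = (value ?? "").Trim(); }
    }

    public void RecordPost() => PostCount++;
}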


Many things in C# are done with delegates where Java would use objects. Delegates are a top-level language feature: events, lambdas, and LINQ are all just forms of delegates, and you start threads and use Tasks (promises) through delegates too. Think of a delegate as a type-safe function pointer. If you want to do composition, it is generally preferred to use generics and delegates instead of a bunch of objects.
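For instance, a small made-up example: the same delegate machinery is behind a LINQ lambda and behind handing work to a Task.

using System;
using System.Linq;
using System.Threading.Tasks;

class DelegateDemo
{
    static void Main()
    {
        // Func<int, bool> is a delegate type; the lambda is the delegate instance.
        Func<int, bool> isEven = n => n % 2 == 0;
        var evens = Enumerable.Range(1, 10).Where(isEven).ToList();

        // Action is a delegate too; Task.Run takes one instead of a Runnable-style object.
        Task work = Task.Run(() => Console.WriteLine("done on a worker thread"));
        work.Wait();
    }
}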
2 years ago
You could also think of it like this...

Determine how capable each sender is by counting the number of sites each one can send to. You want to keep all senders busy, so the senders with the lowest number of sites they can publish to should come first. Your pickiest workers should get the first crack at any jobs; your least picky workers will pick up any remainder.
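In C# terms it could look something like this ("Sender" and "Sites" are hypothetical names, just a sketch of the ordering idea):

using System;
using System.Collections.Generic;
using System.Linq;

class Sender
{
    public string Name { get; set; }
    public List<string> Sites { get; set; } = new List<string>();
}

class SchedulerSketch
{
    // Pickiest senders first: the fewer sites a sender can publish to,
    // the earlier it gets a crack at the available jobs.
    static IEnumerable<Sender> OrderForAssignment(IEnumerable<Sender> senders)
    {
        return senders.OrderBy(s => s.Sites.Count);
    }

    static void Main()
    {
        var senders = new List<Sender>
        {
            new Sender { Name = "A", Sites = new List<string> { "site1" } },
            new Sender { Name = "B", Sites = new List<string> { "site1", "site2", "site3" } },
        };

        foreach (var s in OrderForAssignment(senders))
            Console.WriteLine(s.Name);   // A first: it is the pickiest
    }
}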

I would recommend you rearrange your data to be like this...


Annotate-then-Execute is usually a better model than Loop-and-Calculate, mostly because you can use the debugger to inspect the system easily in this model.
2 years ago
It is a Roslyn-based interpreter that lets you do REPL-based programming in C#, similar to other REPL languages. CSX files are used with csi.exe, which is the interactive C# REPL. There is also a window in Visual Studio called "C# Interactive" that lets you run them. Use csx files if you want to program interactively with a REPL, or if you want quick snippets you can run both inside Visual Studio and from the command line.
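A trivial example (the file name is arbitrary): save the following as hello.csx and run it with csi.exe, or paste it into the C# Interactive window.

// hello.csx -- run with:  csi hello.csx
using System;

var greeting = "Hello from the C# REPL";
Console.WriteLine(greeting);
Console.WriteLine(DateTime.Now);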

https://msdn.microsoft.com/en-us/magazine/mt614271.aspx
https://www.red-gate.com/simple-talk/dotnet/net-development/going-interactive-c/

In the .NET world, lots of people still use LINQPad to do ad-hoc exploration and execution in .NET languages, mostly because there was no REPL for quick, ad-hoc programming. LINQPad is still god-like when you are exploring data interactively, though.


CSX files are not a replacement for PowerShell, or for the old Windows Scripting Host (VBScript or JScript).
2 years ago
.NET Core is just what it takes to run the code; the designers, editors, and debugging support in Visual Studio are not part of the .NET Core runtime. So you would want to install .NET Core support as a Visual Studio component. As a bonus, when you update Visual Studio you will also get new versions of .NET Core. The same goes for Visual Studio Code, except that the extra features come as extensions, so it is a bit easier to pull in just what you need.
2 years ago
When viewing a post in the mobile view, the syntax-highlighting JavaScript loads on every page load, even if there are no BBCode CODE tags on the page. The desktop view doesn't do this; it only loads the script files if a post uses the code tags.

Over on Permies, where more than 50% of the traffic is mobile, that is a fair amount of overhead.
3 years ago
The Cattle Drive links were never ported over to CodeRanch. (I am in the Cattle Drive and was sent to the URL directly.) So if someone didn't enter the site through javaranch.com, they would never know about it. If you are wondering why there are no new Cattle Drive members, that is why.

Also, under the "JavaRanch Neighbors!" section, there are links to http://www.aspose.com/java/excel-component.aspx and others. The company is still in business, but all the links are bad.

The whole "Bunkhouse" section is reviewing books written in 2002-2011.

There is a lot that is outdated on the JavaRanch site, and lots of links don't have a corresponding CodeRanch version.


3 years ago
something I missed...

Most things that use JSON as their serialization format allow you to ignore fields you don't understand, or allow for partial objects, especially since JavaScript objects can have extra values added to them at any time. I know lots of people who don't use a serializer at all and instead use something like JSONObject from the org.json package to access the data as a series of arrays and hashtables. So if you add a new field, I would not worry too much about it breaking the caller.


The other thing to keep in mind: don't use internal data structures as the source of your document data. Create a set of Data Transfer Objects that go with your endpoints, then use an object-to-object mapping tool to help you fill them out. This helps when you want to create different versions of the REST service ("test.com/api/v1/Data" vs "test.com/api/v2/Data"); you just have to make sure version 1 and version 2 both keep working. If you have "Data_v1" and "Data_v2" objects, you can use a mapper to create them both from your internal "DataBusinessObject". That way you have two simplified paths for data transfer objects, but only one path for business logic.
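A bare-bones sketch of that layout (all property names are made up; only the class names above come from the description):

// Internal business object: never exposed directly by the REST endpoints.
public class DataBusinessObject
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string InternalNotes { get; set; }   // stays internal
}

// Contract for test.com/api/v1/Data
public class Data_v1
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Contract for test.com/api/v2/Data -- can evolve without breaking v1 callers.
public class Data_v2
{
    public int Id { get; set; }
    public string DisplayName { get; set; }
}

public static class DataMapper
{
    // An object-to-object mapping library could replace these hand-written methods.
    public static Data_v1 ToV1(DataBusinessObject source) =>
        new Data_v1 { Id = source.Id, Name = source.Name };

    public static Data_v2 ToV2(DataBusinessObject source) =>
        new Data_v2 { Id = source.Id, DisplayName = source.Name };
}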

Here is a short article that talks about versioning REST APIs and describes the many ways you can do it: REST API Versioning. There isn't one right way to do it.
3 years ago
The purpose of SOAP (and WSDL) is to be a strongly typed messaging contract that can go over multiple protocols (SMTP, TCP, as well as HTTP). WSDLs were built to describe how the messages and endpoints should be structured, which allowed tools to code-generate proxy classes. SOAP was built to let backend services talk to each other. Since HTTP wasn't the only target protocol, SOAP 1.0 essentially ignored transport security. However, as firewalls got tougher, all traffic just went over HTTP because port 80 (non-SSL) was almost always open, so later versions ended up needing to specify some extra security. While it was built to be a messaging protocol, it ended up being a way to make Remote Procedure Calls: the code-generation tools usually transparently wrapped the parameter list into an XML-serialized argument object, and the endpoint was the function name. If you have tools to do this work for you, you can ignore almost all of SOAP's quirks. It had far too much structure, and later security, to be of much use talking browser to server.

REST is designed to just use HTTP, with its own security/header/location/error mechanisms, instead of the SOAP envelope and transport. The lack of messaging structure in REST meant that the contract was something between the caller and the receiver, not the protocol intermediaries. This made it really easy to handle in JavaScript on the browser. Better serialization architectures on the server usually meant it was fairly easy to return JSON, XML, CSV, or whatever format the client asked for via MIME types; your code might not even know the difference.

Nothing about REST absolves you of responsibility for having a stable (or backwards-compatible) contract with the consumer. If you want some documentation of the structure of your REST document, you just need another mechanism to define or validate it. If your document is XML, create an XSD. If you are using JSON, you can use JSON Schema for the data, or you can use an OpenAPI (Swagger) definition as a more comprehensive REST endpoint/document schema (essentially a WSDL for REST). It is entirely possible to create a contract so rigid that you replicate all the worst features of SOAP.

3 years ago