Nicely explained, but I need to understand why you use int sometimes and float or double other times. It makes no sense to me.
That's an excellent question, and it's one of many things I go into in more detail in my book.
Honestly, in this particular tutorial, I chose int, double, and float at random. I was mostly just picking different types as a way to show that they can all be used in this math stuff. So if your question here is, "Why in these specific cases did you choose int or whatever?" the answer is, "There is no real reason. I just picked them at random."
At a more general level though, there is some method to choosing which type to use.
Your first real choice is between the integer types (int, short, long, byte, uint, etc.) and the floating point types (float, double, decimal). You make this decision based on whether you can be certain you're only going to use integers (whole numbers and their negatives) in which case you'll need one of the integer types, or if you need fractional/decimal numbers, in which case you'll need one of the floating point types.
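To make that first decision concrete, here's a quick made-up example (the numbers and variable names are just for illustration, and these snippets all assume you're inside a normal C# console program where Console is available). Notice how the integer types silently throw away the fractional part:

    int slices = 7;
    int people = 2;
    Console.WriteLine(slices / people);          // prints 3 -- integer division drops the .5
    Console.WriteLine(slices / (double)people);  // prints 3.5 -- a floating point type keeps it

If you need that .5, you need a floating point type somewhere in the calculation.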
Once this decision has been made, it all boils down to accuracy and range of values that you need.
The signed integer types, sbyte, short, int, and long, all allow you to supply a negative value, while the unsigned integer types (byte, ushort, uint, and ulong) do not. If you know you need negative values, then you simply aren't going to be able to use the unsigned integer types, and must choose one of the signed versions. From there, it's a matter of looking at their ranges and deciding which one covers your values while keeping memory usage small. sbyte and byte use only one byte, short and ushort use two, int and uint use four, and long and ulong use eight. byte gets you from 0 up to 255. The short type goes from roughly -32,000 to +32,000. The int type goes from roughly -2 billion to +2 billion. The long type goes from roughly -9 quintillion to +9 quintillion.
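You don't have to memorize those ranges, by the way; each type can report its own limits. A small sketch:

    Console.WriteLine(byte.MinValue + " to " + byte.MaxValue);    // 0 to 255
    Console.WriteLine(short.MinValue + " to " + short.MaxValue);  // -32768 to 32767
    Console.WriteLine(int.MinValue + " to " + int.MaxValue);      // about -2.1 billion to +2.1 billion
    Console.WriteLine(long.MinValue + " to " + long.MaxValue);    // about -9.2 quintillion to +9.2 quintillion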
On the floating point side, float gives you about 7 digits of precision, while double gives you 15 or 16 (depending on the specific value). float can get up to about 3.4 * 10^38, which is a huge number, but double can go up to about 1.7 * 10^308. So double gives you an absolutely massive range, even way greater than float, which was already gigantic, plus it gives you a lot more precision. But a double is 8 bytes compared to 4 for a float.
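Here's a little sketch of that precision difference in action (made-up values again):

    float f = 1f / 3f;     // float literals need the 'f' suffix
    double d = 1.0 / 3.0;
    Console.WriteLine(f);  // roughly 0.3333333 -- about 7 significant digits survive
    Console.WriteLine(d);  // roughly 0.3333333333333333 -- about 15-16 digits survive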
Choosing types is all about balancing the range and precision your data requires against the memory it uses.
Let me rephrase all of this in a more practical way though.
In most cases, you don't have to worry too much about a few extra bytes here and there. There's enough memory available that a little extra isn't going to hurt you much.
Because of this, most programmers tend to use int when they can use an integer type (as opposed to trying to optimize too much and using short or byte). Even if you don't think you're using negative numbers, and could theoretically use one of the unsigned types like ushort or uint, most programmers won't actually use those types unless they can basically guarantee that their numbers fit into those ranges exactly.
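Part of the reason for that caution: if a calculation dips below zero even for a moment, an unsigned type wraps around instead of going negative, which can cause some very surprising bugs. A quick made-up sketch:

    uint itemsLeft = 0;
    itemsLeft--;                   // by default this wraps around rather than throwing
    Console.WriteLine(itemsLeft);  // 4294967295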
Most programmers won't use long or ulong unless either some sort of spec dictates it (like an RFC for networking packets) or they bump into a problem where int turned out not to be good enough. int seems to always be the default.
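A typical example of int not being good enough (the numbers here are purely illustrative) is anything that can grow past about 2.1 billion, like a file size in bytes:

    long bytesOnDisk = 3_000_000_000;  // fits easily in a long
    // int tooSmall = 3_000_000_000;   // this wouldn't even compile; it's outside int's range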
The byte type is almost never used, except when you're serializing stuff to a byte stream, like you might do when communicating between client and server. In those cases, you won't often see just a single byte, but a byte array.
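For instance, something like this (the score variable is just a made-up example to show the shape of it):

    int score = 1234;
    byte[] packet = BitConverter.GetBytes(score);  // the four raw bytes that make up the int
    Console.WriteLine(packet.Length);              // 4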
On the floating point side, float used to be more common, but in recent years (the last decade or two), as memory has become very cheap, double has become the norm. I personally use double whenever I have to choose between the two. The only time I end up using float is when something I'm working with requires it. DirectX and OpenGL, for example, work with float-sized data, so that's when I'll drop down to float.
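When I do have to drop down, it usually looks something like this (illustrative names only):

    double simulated = 12.75;            // my own math stays in double
    float forTheGpu = (float)simulated;  // narrowing from double to float requires an explicit cast
    float literal = 12.75f;              // or write a float literal directly with the 'f' suffix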
I've only really found one or two uses for decimal. It's a type that's designed for precise calculations involving money. Interest calculations and things like that. It has a much smaller range than both float and double, but with way more precision than either.
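The classic illustration of why money code reaches for decimal (just a sketch, but the results are real):

    double d = 0.1 + 0.2;
    decimal m = 0.1m + 0.2m;      // the 'm' suffix makes these decimal literals
    Console.WriteLine(d == 0.3);  // False -- binary floating point can't store 0.1 exactly
    Console.WriteLine(m == 0.3m); // True -- decimal stores exact base-10 digits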
So.
Bottom line: Most programmers will default to int in the integer world and double in the floating point world. Various other things might force you to consider using smaller or larger ranges in certain scenarios, which would drive you to one of the alternative choices, depending on the specifics of your situation.
Hey RB! I just want to say that I am 31 years old, and not working at the moment… I have never been to college, and I also still live with my mother and stepfather. I have had a really hard time growing up and made some big mistakes hanging out with the wrong type of people, and got deep into drugs. But I have been sober since August 13th, 2015, and now that I am sober and finally done with going to meetings and everything, it is time that I start my future, because I have waited long enough. So since Comp-Sci has been my passion since I was 11, I have finally decided to learn how to program and make my own apps/programs. Your beginner's guides were introduced to me a few days ago in the Unity Official Discord, and ever since… I have been SO happy that I finally found a simple, yet detailed and comprehensive guide.
I just want to say thank you SO much! I truly feel excited about the future now, and I feel motivated, and I cannot wait to see where this trail leads me!
Thanks again,
Jon
Sorry for not replying sooner. (It says "guest" by your name, and I think that likely means you won't even see this reply, unfortunately.) But I'm very happy to hear this. I'm glad it's making a difference for you!