You're hitting on a very big concept here. Choosing the right data type isn't actually all that simple. It takes practice, experience, and doing it the wrong way once or twice.
I can give you a few pointers, though:
1. Don't stress too much. No matter what, you'll get it wrong on occasion.
2. When in doubt, go with int. It's a good default. It's big enough to cover most things without taking up too much space. Also, most things play nice with the int type, as we saw in the example you brought up.
3. Memorize the number of bytes each type takes: byte is 1, short is 2, int is 4, long is 8. Then memorize (approximately) the range that each size gives you: byte is 0-255, short is +/- 32,000, int is +/- 2 billion, and long is +/- 9 quintillion (a.k.a. "really big"). (The sketch after this list prints the exact ranges if you're curious.)
4. The idea is to choose the smallest one that you're 100% certain is big enough. If you know there is genuinely a hard limit, it's easier to pick a size than if you're thinking, "I doubt my players will score more than 32,000." Someday, somebody is going to get a freakishly high score. Do you know what happens when they go past the upper limit? It wraps around: their freakishly high score becomes a freakishly low score. (The sketch after this list shows exactly that.)
5. Err on the side of too big, rather than too small. Memory is cheap these days. It's better to be a memory hog than have to completely refactor your code because you chose the wrong size to begin with.
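If you'd rather not memorize the exact ranges, you can always ask the runtime. Here's a minimal C# sketch (I'm assuming a plain console app) that prints the sizes and ranges from point 3 and then demonstrates the wrap-around from point 4:

```csharp
using System;

class Ranges
{
    static void Main()
    {
        // The exact numbers behind the approximations in point 3.
        Console.WriteLine($"byte:  {sizeof(byte)} byte,  {byte.MinValue} to {byte.MaxValue}");
        Console.WriteLine($"short: {sizeof(short)} bytes, {short.MinValue} to {short.MaxValue}");
        Console.WriteLine($"int:   {sizeof(int)} bytes, {int.MinValue} to {int.MaxValue}");
        Console.WriteLine($"long:  {sizeof(long)} bytes, {long.MinValue} to {long.MaxValue}");

        // The wrap-around from point 4: one step past the top lands at the bottom.
        short score = short.MaxValue;   // 32,767
        score++;                        // overflows...
        Console.WriteLine(score);       // ...and prints -32768
    }
}
```

If you'd rather get an exception than a silent wrap, you can put that last bit inside a `checked { ... }` block.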
The same kind of analysis leads me to choose double over float and decimal by default.
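To make that concrete, here's a quick illustration of why double is my default: float only keeps about 7 significant digits, while double keeps about 15-16. (The exact printed output can vary a little between runtimes; these snippets assume the same console app as above.)

```csharp
float  f = 1.23456789f;   // float keeps roughly 7 significant digits
double d = 1.23456789;    // double keeps roughly 15-16

Console.WriteLine(f);     // prints something like 1.2345679 (the tail is already gone)
Console.WriteLine(d);     // prints 1.23456789
```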
But like I said, it usually just takes time, practice, and your own mistakes to really start to get the hang of it.
If you're wondering how to know what type you'll get back when you do different operations… well… If you're doing operations with longs, you get a long back. If you're doing operations with anything smaller than an int (or with ints themselves), everything is automatically converted to int to do the computation, and you get an int back. (So the rule is basically: everything smaller than an int gets turned into an int to do math.) There are two reasons for this. First, most processors do their arithmetic with ints natively, so it makes some sense for the language to follow what the hardware expects. Second, it is really easy to overflow (go beyond the range allowed) bytes and shorts. It's a lot rarer to overflow the int or long types.
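Here's what that promotion looks like in practice (the commented-out lines won't even compile without the casts):

```csharp
byte a = 10;
byte b = 20;

// byte sum = a + b;          // compile error: the math is done as ints, so a + b is an int
int sum = a + b;              // fine: both bytes are promoted to int for the math
byte forced = (byte)(a + b);  // you have to cast explicitly to squeeze the result back into a byte

long big = 5_000_000_000;     // too big for an int, so the literal is a long
long doubled = big * 2;       // long math: the 2 is promoted to long, and you get a long back
```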
It's worth mentioning that this same thing doesn't happen with floating point types like float, double, and decimal. If you multiply two floats, you'll get a float back. (Unless you mix two types, in which case the smaller type is promoted to the larger type automatically. For example, a float plus a double will convert the float to a double, do the addition as doubles, and give you back a double.) The one exception is decimal, which won't mix with float or double unless you cast explicitly.
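For example:

```csharp
float f = 1.5f;
float g = 2.5f;
float product = f * g;    // float * float stays a float

double d = 3.0;
double mixed = f + d;     // the float is promoted to double, and the math happens as doubles
// float bad = f + d;     // compile error: you'd have to cast the double result back down

decimal m = 1.5m;
// decimal oops = m + d;  // compile error: decimal won't mix with double without an explicit cast
```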
Geez… as I sit here and try to outline all of the possible ways things could happen, it even starts to overwhelm me! Hopefully, I haven't confused you too much…