Is higher bit depth better?
The higher the bit depth, the more data will be captured to more accurately re-create the sound. If the bit depth is too low, information will be lost, and the reproduced sample will be degraded. For perspective, each sample recorded at 16-bit resolution can contain any one of 65,536 unique values (2¹⁶).
The only real benefit of 32-bit audio is the added headroom when it comes to editing. While you get less distortion with 32-bit audio, 24-bit audio already gives you enough headroom, with room to spare. The differences between these bit depths are inaudible and not really worth the hype.
A sample recorded at 24-bit depth can capture more nuance and is therefore more precise than one recorded at 16-bit depth. To be more explicit, consider the maximum number of values each bit depth can store; the difference in possible values between the two is huge, as the quick sketch below illustrates.
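As a rough illustration (a minimal Python sketch, not from the original article), the number of possible values grows exponentially with bit depth:

```python
# How many discrete values each bit depth can store.
for bits in (8, 16, 24, 32):
    print(f"{bits}-bit: {2 ** bits:,} possible values")
# 8-bit: 256 possible values
# 16-bit: 65,536 possible values
# 24-bit: 16,777,216 possible values
# 32-bit: 4,294,967,296 possible values
```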
A higher-bit recording captures more information about the sound and therefore gives a much more accurate representation. A higher bit depth also makes it possible to record quieter sound sources without those sounds being lost in the noise floor.
As one might expect, a higher bitrate improves the quality of a video: higher bitrate correlates with higher image quality, while lower bitrate results in lower quality. Twitch streamers want to strive for a higher bitrate, as it means their stream's video quality will be better.
So, you should work at a bit depth of 24 bits or above, and use 16 bits for final renders. Generally, it is impossible to notice the difference between 24- and 32-bit audio, but 32-bit floating point will prevent waveforms from losing data when they clip, so it is worth it for high-fidelity productions.
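A minimal NumPy sketch of that headroom idea (the peak level and gain figures are illustrative, not from the article):

```python
import numpy as np

# A peak that exceeds full scale (1.0) survives in 32-bit float and can be
# turned down later, but is clipped for good once stored as 16-bit integers.
peak = np.float32(1.6)                                        # ~4 dB over full scale

stored_as_float32 = peak                                      # kept intact: 1.6
stored_as_int16 = np.int16(np.clip(peak, -1.0, 1.0) * 32767)  # pinned at 32767

print(stored_as_float32)          # 1.6 -> just lower the gain later, nothing lost
print(stored_as_int16 / 32767)    # 1.0 -> the overshoot is gone for good
```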
The 30 and 36 bits per pixel settings are used for TVs that support “Deep Color.” Most modern HDTVs support this. While 36 bits per pixel is technically the “best option,” there is currently no gaming or movie content that is more than 24 bits per pixel.
As mentioned above, a higher color depth requires more system resources and makes the computer work harder. If your computer is running low on memory, this may slow down the system. Also, in gaming, a higher color depth may decrease your FPS, depending on your video card and the game you are playing.
While a 16-bit processor can simulate 32-bit arithmetic using double-precision operands, 32-bit processors are much more efficient. While 16-bit processors can use segment registers to access more than 64K elements of memory, this technique becomes awkward and slow if it must be used frequently.
An important thing to note is that the higher the bit depth, the lower the level of the noise floor. As such, a higher bit depth provides greater dynamic range.
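The standard rule of thumb (a textbook approximation, not stated in the article) is roughly 6 dB of dynamic range per bit:

```python
# Approximate dynamic range of an ideal N-bit converter: ~6.02 dB per bit.
for bits in (16, 24, 32):
    print(f"{bits}-bit: ~{6.02 * bits:.0f} dB of dynamic range")
# 16-bit: ~96 dB
# 24-bit: ~144 dB
# 32-bit: ~193 dB
```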
Is 8 or 16-bit depth better?
However, the practical answer is that 8-bit color is better to work with, except for a few specific situations. In most cases it's better to work with 8-bit color because you'll have more options available in Photoshop, your computer will be faster, and the files will be smaller.
Bit depth refers to the number of colours that can be displayed by a digital device. The higher the bit depth, the more colours used in the image and, consequently, the larger the file size. JPEG images are always recorded with 8-bit depth. This means the files can record 256 (2⁸) levels of red, green and blue.

Very small changes in the input signal produce big changes in the quantized version. This is the rounding error in action, which has the effect of amplifying small-signal noise. So once again, noise becomes louder as bit-depth decreases.
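A rough NumPy sketch of that effect (the 440 Hz tone and 48 kHz rate are arbitrary demo choices): quantize a sine wave to N bits and measure how loud the rounding error is relative to the signal.

```python
import numpy as np

# Quantize a full-scale sine wave to N bits and measure the signal-to-noise
# ratio caused by the rounding error alone.
t = np.linspace(0, 1, 48_000, endpoint=False)
signal = np.sin(2 * np.pi * 440 * t)

for bits in (8, 16, 24):
    levels = 2 ** (bits - 1)                     # signed quantization steps
    quantized = np.round(signal * levels) / levels
    noise = signal - quantized
    snr_db = 10 * np.log10(np.mean(signal ** 2) / np.mean(noise ** 2))
    print(f"{bits}-bit: SNR ~{snr_db:.0f} dB")   # lower bit depth -> louder noise
```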
Using a higher sample rate for your music recordings can prevent aliasing problems that are common with cymbals, brass, and some string instruments. A moderately higher sample rate can also smooth out high-frequency filters.
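A small sketch of why the sample rate matters here (the frequencies are illustrative): any tone above the Nyquist limit, half the sample rate, folds back to a lower frequency.

```python
# A tone above the Nyquist limit (half the sample rate) folds back
# ("aliases") to a lower, audible frequency.
def alias_frequency(tone_hz, sample_rate_hz):
    nyquist = sample_rate_hz / 2
    folded = tone_hz % sample_rate_hz
    return folded if folded <= nyquist else sample_rate_hz - folded

print(alias_frequency(30_000, 44_100))   # 14100 -> aliases into the audible band
print(alias_frequency(30_000, 96_000))   # 30000 -> captured without folding
```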
Pros: more dynamic range, a lower noise floor, and less need for "hot" levels, and thus less chance of clipping and distortion. Cons: takes up more hard drive space. Advice: work at 24-bit or 32-bit floating point; there are really no compelling arguments for working at 16-bit resolution anymore.
Similarly, streaming video at a higher resolution and frame rate looks better but requires a higher bitrate. In short, both play a pivotal role in the viewing experience, and neither can be conclusively said to be more important than the other.
Sample rate and bit depth are two values that you've likely noticed within your digital audio workstation's export settings. Sample rate refers to the number of samples an audio file carries per second, while bit depth dictates the amplitude resolution of audio files.
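Here is a minimal sketch using Python's standard-library wave module; the file name and the 440 Hz test tone are illustrative, and the point is simply where the two export settings end up: the sample rate goes to setframerate and the bit depth (in bytes) to setsampwidth.

```python
import math
import struct
import wave

sample_rate = 44_100        # samples per second
bit_depth = 16              # bits per sample -> 2 bytes per sample
duration_s = 1

# One second of a half-amplitude 440 Hz sine, packed as little-endian 16-bit samples.
frames = b"".join(
    struct.pack("<h", int(32767 * 0.5 * math.sin(2 * math.pi * 440 * n / sample_rate)))
    for n in range(sample_rate * duration_s)
)

with wave.open("tone.wav", "wb") as wav_file:
    wav_file.setnchannels(1)                 # mono
    wav_file.setsampwidth(bit_depth // 8)    # bit depth, expressed in bytes
    wav_file.setframerate(sample_rate)       # sample rate
    wav_file.writeframes(frames)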
For consumer/end-user applications, a bit depth of 16 bits is perfectly fine. For professional use (recording, mixing, mastering or professional video editing) a bit depth of 24 bits is better.
In a 10-bit system, you can produce 1024 x 1024 x 1024 = 1,073,741,824 colors, which is 64 times the number of colors of an 8-bit system. Even more striking, a 12-bit system is able to produce a whopping 4096 x 4096 x 4096 = 68,719,476,736 colors!
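The same arithmetic in a short Python sketch, for reference:

```python
# Colors a display can reproduce at a given per-channel bit depth
# (three channels: red, green and blue).
for bits_per_channel in (8, 10, 12):
    levels = 2 ** bits_per_channel
    print(f"{bits_per_channel}-bit/channel: {levels ** 3:,} colors")
# 8-bit/channel: 16,777,216 colors
# 10-bit/channel: 1,073,741,824 colors
# 12-bit/channel: 68,719,476,736 colors
```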
The higher quality of 10-bit video also means the files it creates are comparatively larger than 8-bit videos, so they take up more storage space and require more processing power when editing. The extra quality can be worth it, but only if it's required in your workflow.
What is the best color depth?
A better option would be "30-48 bits" (aka "Deep Color"), which is 10-16 bits/channel, with anything over 10 bits/channel being overkill for display in my opinion.
Editing images in 16/48-bits produces the highest quality results; you can save images in 8/24 bits after editing is complete.
True color (24-bit)
As of 2018, 24-bit color depth is used by virtually every computer and phone display and the vast majority of image storage formats. Almost all cases of 32 bits per pixel assign 24 bits to the color, with the remaining 8 used as an alpha channel or left unused. 2²⁴ gives 16,777,216 color variations.
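As a hypothetical example of one common 32-bit pixel layout (ARGB; the channel order varies by format), the extra 8 bits simply ride alongside the 24 color bits:

```python
# Pack 8 bits each of alpha, red, green and blue into a single 32-bit value.
def pack_argb(r, g, b, a=255):
    return (a << 24) | (r << 16) | (g << 8) | b

pixel = pack_argb(255, 128, 0)    # an opaque orange
print(hex(pixel))                 # 0xffff8000
```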
Do I Need a 10-Bit Monitor for Gaming? Yes, and to be honest, you should aim to get one anyway. As we just said, 8-bit color is very 1980s. In an age of 4K HDR you really want to have a 10-bit color depth display to get the benefit of modern graphics and content.
Bit depth will not affect CPU performance: in a computer, all operations are performed on 32-bit integers or 32-/64-bit floats, and the CPU and the floating-point unit are designed for that. Thus, you'll have no problems dealing with 24-bit or even 32-bit floating-point files.
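A minimal sketch of that widening step (the byte values are illustrative): a 24-bit PCM sample, stored as 3 bytes, is simply sign-extended into an ordinary integer before the CPU works on it.

```python
# Convert a 24-bit little-endian PCM sample (3 bytes) to a plain integer.
def sample_from_24bit(raw: bytes) -> int:
    return int.from_bytes(raw, byteorder="little", signed=True)

print(sample_from_24bit(b"\xff\xff\x7f"))   #  8388607 (largest 24-bit value)
print(sample_from_24bit(b"\x00\x00\x80"))   # -8388608 (smallest 24-bit value)
```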