Checking whether code is secure by counting uses of memcpy is pretty stupid, but it is very easy, so that is what checklist experts do when they "check for security".
memcpy_s is not supported by glibc or most other libc implementations on Linux, and I do not expect this to change in the future. Here is a good analysis of this optional extension to the C standard from a glibc developer:
http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1969.htm
All standard Linux tools use the "unsafe" functions from the libc.
You should not use gets(), there are better alternatives in your libc. ;-)
Migrating an existing code base from the normal glibc functions like memcpy to memcpy_s is not easy: you will get it wrong in 10% of cases or more if you are not the original author of the code or do not have very long experience with it. Even when you are an expert, a lot of problems get introduced; this is from my own experience. This is not a search-and-replace task for a junior!
Having one team develop something and another team make it secure does not work, from my point of view. You should teach all your developers what they have to look for and why. I think it is important not only to say that X, Y and Z are banned, but also to explain exactly why, and how to solve the use cases X, Y and Z were used for.
A lot of security work is done not to improve security but for compliance with some internal or external guideline. Compliance is checked with tools, for example checking whether "grep memcpy(" finds a result. Then the engineer or his manager will use the cheapest way to make the check pass, like Huawei did here.
To improve real security you need experts looking at the actual code together with the original developer. These experts need more than good PPT and Excel skills: they need knowledge of that kind of software, and you probably need different people for an embedded controller than for a node.js application.
I've heard "the _s stands for stupid" in reference to those functions. They've never made sense to me, and definitely look like the product of those "checklist experts" --- in which case it's no surprise that intelligent people will naturally find equally stupid ways around it.
In short: There is no replacement for intelligence.
Yet there is plenty of corporate propaganda (for lack of a better term) that advocates dumbing developers down and treating them as replaceable.
> Yet there is plenty of corporate propaganda (for lack of a better term) that advocates dumbing developers down and treating them as replaceable.
why does this happen, and what can we do to stop it?
First, simply replacing strcpy with strcpy_s is impossible on purpose. He cites that as a disadvantage, but the advantage is that the user has to check the error value and do something. Before, the code ignored all errors and was happy with UB, SEGVs or silent overwrites. Now some action has to be taken in the error case. The _s functions are not meant to be used as nested expressions.
The second argument, that error callbacks are insecure, is also bogus. The current practice of env callbacks and overrides is much more insecure, and slower. Look at the myriad of nonsense glibc put into its dynamic linker and malloc: LD_ this and MALLOC_ that. Just as an attacker can redirect the callback function to his own, he can overwrite the process environment block to set an evil LD_LIBRARY_PATH and load an evil .so. glibc has about 20 of those; safeclib has two.
safeclib is also not slower on good compilers. On recent clang it is even faster than glibc. glibc likes to break optimization boundaries with asm, while safeclib uses macros for zero-cost compile-time checks and plain code that can easily be inlined and tree-optimized. That is why it can be both safer and faster than glibc.
glibc does not support strings, only raw memory buffers, length- or zero-delimited. Strings are Unicode nowadays. glibc does not care about strings or their security implications; safeclib does.
In general, I have found that any kneejerk attempt to audit “unsafe” functions and replace them with “safe” alternatives does not go all that well. Often the people pushing the replacements have no real idea about the supposedly “safe” replacements (almost all of them suck in at least one major way), and the people doing the replacements even less so. So people waste time and make their programs slower and harder to read for little benefit.
See the safeclib docs and tests. I'm the maintainer. Only a couple do not conform.
Their SecureMemset variant is insecure. Most crypto memset_s implementations are unsafe because they do not want to flush the cache, so attackers can look in the cache for the secrets.
You can link one of these libraries against your code on Linux to get the Safe C functions: https://github.com/rurban/safeclib https://sourceforge.net/p/safeclib/code/ci/master/tree/