## Building (parts of) yugabyte locally

Yugabyte is best built inside a Docker container, so that the build environment matches the one the actual releases are built with. I found a suitable build image by looking at YB's CI scripts: https://github.com/yugabyte/yugabyte-db/blob/master/.github/workflows/build.yml.

### Steps

Check out the source code. Make sure the build is clean (i.e. not initialized in your local environment); otherwise move or delete the `build` directory.

From inside the source directory, start a build shell like this:

```sh
docker run --rm -ti -v "$PWD":/opt/yb-build/yugabyte-db yugabyteci/yb_build_infra_centos7:v2021-03-26T05_02_29
```

Go to the build directory:

```sh
cd /opt/yb-build/yugabyte-db
```

In the shell, run

```sh
./yb_build.sh release --download-thirdparty --ninja --cmake-only
```

to set up the build. Then you can use `ninja` commands to build submodules of YB:

```sh
cd /opt/yb-build/yugabyte-db/build/release-gcc-dynamic-ninja
ninja yb_util
# or
ninja rocksdb
```

The built artifacts can be found in the `bin` and `lib` directories. If you built only a submodule, you can hot-patch it on the server by copying over just the `.so` file you built. (As an additional sanity check, compare the sizes of the original and the new `.so` file.)

### Finding ninja targets

The mapping between source directories and target modules is not always obvious. To get at least a list of available targets, use `ninja -t targets`. To verify that you picked the right module, you can introduce a deliberate error into a source file and check that the target's build fails (which proves the target actually includes that source file).

### Building a full release

You can use

```sh
./yb_build.sh release --download-thirdparty --ninja
```

to build the full release, but that takes a long time (~30 minutes for me).
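
The size comparison mentioned in the hot-patch step can be turned into a small guardrail before copying. This is only a sketch: the `same_ballpark` helper, the `/tmp` demo paths, and the 2x tolerance are my own choices, not anything YB ships.

```shell
# Guardrail before hot-patching a .so: refuse to proceed if the new file's
# size differs wildly from the old one (hypothetical tolerance: within 2x).
same_ballpark() {
  old_size=$(stat -c %s "$1")
  new_size=$(stat -c %s "$2")
  # Accept if new size is between half and double the old size.
  [ $((new_size * 2)) -ge "$old_size" ] && [ "$new_size" -le $((old_size * 2)) ]
}

# Demo with throwaway files; the real use would compare the server's .so
# against the freshly built one before cp-ing it over.
printf 'aaaaaaaaaa' > /tmp/old.so     # 10 bytes
printf 'aaaaaaaaaaaa' > /tmp/new.so   # 12 bytes
if same_ballpark /tmp/old.so /tmp/new.so; then
  echo "sizes comparable"
fi
```

In real use you would replace the demo files with the deployed `.so` and the one from `build/release-gcc-dynamic-ninja/lib`, and only copy when the check passes.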
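
Since `ninja -t targets` prints one `target: rule` line per target, piping it through `grep` is a quick way to narrow the list down when hunting for a module. The sample output below is hard-coded so the pipeline can be shown without ninja installed; the rule names are illustrative.

```shell
# Real use inside the build dir: ninja -t targets | grep yb_util
# Simulated here with sample "target: rule" lines:
printf 'yb_util: phony\nrocksdb: phony\nlatest_symlink: CUSTOM_COMMAND\n' \
  | grep '^yb_'
```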