diff --git a/docs/workflow/README.md b/docs/workflow/README.md index 332b5a6298122..78b4fd3f19bef 100644 --- a/docs/workflow/README.md +++ b/docs/workflow/README.md @@ -1,106 +1,162 @@ # Workflow Guide

-- [Build Requirements](#build-requirements)
-- [Getting Yourself Started](#getting-yourself-started)
-- [Configurations and Subsets](#configurations-and-subsets)
-  - [What does this mean for me?](#what-does-this-mean-for-me)
-- [Full Instructions on Building and Testing the Runtime Repo](#full-instructions-on-building-and-testing-the-runtime-repo)
+- [Introduction](#introduction)
+- [Important Concepts to Understand](#important-concepts-to-understand)
+  - [Build Configurations](#build-configurations)
+  - [Build Components](#build-components)
+- [Building the Repo](#building-the-repo)
+  - [General Overview](#general-overview)
+  - [Get Started on your Platform and Components](#get-started-on-your-platform-and-components)
+  - [General Recommendations](#general-recommendations)
+- [Testing the Repo](#testing-the-repo)
+  - [Performance Analysis](#performance-analysis)
- [Warnings as Errors](#warnings-as-errors)
- [Submitting a PR](#submitting-a-pr)
-- [Triaging errors in CI](#triaging-errors-in-ci)
+- [Triaging Errors in CI](#triaging-errors-in-ci)

-The repo can be built for the following platforms, using the provided setup and the following instructions. Before attempting to clone or build, please check the requirements that match your machine, and ensure you install and prepare all as necessary.
+## Introduction

-## Build Requirements
+You can work on the runtime repo on Windows, Linux, macOS, and FreeBSD. Each platform has its own specific setup requirements, and not all architectures are supported for dev work. That said, builds can target a wider range of platforms than these. In other words, there are always two platforms at play whenever you work with builds in the runtime repo:
+
+- **The Build Platform:** This is the platform of the machine where you cloned the runtime repo, and therefore where all your build tools run. The following table shows the OS and architecture combinations that we currently support, as well as links to each OS's requirements doc. If you are using WSL directly (i.e. not Docker), then follow the Linux requirements doc.

| Chip  | Windows  | Linux    | macOS    | FreeBSD  |
-| :---- | :------: | :------: | :------: | :------: |
+| :---: | :------: | :------: | :------: | :------: |
| x64   | ✔ | ✔ | ✔ | ✔ |
| x86   | ✔ |   |   |   |
| Arm32 |   | ✔ |   |   |
| Arm64 | ✔ | ✔ | ✔ |   |
|       | [Requirements](requirements/windows-requirements.md) | [Requirements](requirements/linux-requirements.md) | [Requirements](requirements/macos-requirements.md) | [Requirements](requirements/freebsd-requirements.md) |

+- **The Target Platform:** This is the platform you are building the artifacts for, i.e. the platform you intend to run your builds on.
+
+The *Build Platform* and the *Target Platform* can be the same or different. The former scenario is the simplest one, as you will likely do all the work on the same machine. The latter is known as *cross-compiling*, and certain workflows require it because some targets (e.g. WebAssembly (WASM), browsers, and mobile platforms) cannot build the repo directly. The full instructions on how to work with this are detailed in the building docs later on.
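+For a quick taste of what cross-compiling looks like (a sketch only; the exact flags and the rootfs preparation for each scenario are covered in the platform-specific building docs, and `/crossrootfs/arm64` is just an illustrative path), the target is selected through arguments to the root build script:
+
+```bash
+# Sketch: build the clr subset for linux-arm64 from an x64 Linux machine,
+# using a cross-compilation rootfs prepared beforehand.
+ROOTFS_DIR=/crossrootfs/arm64 ./build.sh -subset clr -arch arm64 -cross
+```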
+Additionally, keep in mind that cloning the full history of this repo takes roughly 400-500 MB of network transfer, inflating to a repository that can consume somewhere between 1 and 1.5 GB. A build of the repo can take somewhere between 10 and 20 GB of space for a single OS and platform configuration, depending on the portions of the product built. This might increase over time, so consider this a minimum bar for working with this codebase.

-## Getting Yourself Started
+The runtime repo consists of three major components:
+
+- The Runtimes (CoreCLR and Mono)
+- The Libraries
+- The Hosts and Installers
+
+You can run your builds from a regular terminal, from the root of the repository. Sudo and administrator privileges are not needed for this.
+
+- For instructions on how to edit code and make changes, see [Editing and Debugging](/docs/workflow/editing-and-debugging.md).
+- For instructions on how to debug CoreCLR, see [Debugging CoreCLR](/docs/workflow/debugging/coreclr/debugging-runtime.md).
+- For instructions on using GitHub Codespaces, see [Codespaces](/docs/workflow/Codespaces.md).
+
+## Important Concepts to Understand
+
+The following sections describe some important terminology to keep in mind while working with runtime repo builds. For more information, and a complete list of acronyms and their meanings, check out the glossary [over here](/docs/project/glossary.md).
+
+### Build Configurations
+
+To work with the runtime repo, there are three supported configurations (one is *CoreCLR* exclusive) that define how your build will behave:
+
+- **Debug**: Non-optimized code. Asserts are enabled. This configuration runs the slowest. As its name suggests, it provides the best experience for debugging the product.
+- **Checked** *(CoreCLR runtime exclusive)*: Optimized code. Asserts are enabled.
+- **Release**: Optimized code. Asserts are disabled. Runs at the best speed, and is most suitable for performance profiling. However, this impacts the debugging experience, as compiler optimizations make it difficult to relate what the debugger shows to the source code.
+
+### Build Components
+
+- **Runtime**: The execution engine for managed code. There are two different implementations, both written in C or C++:
+  - *CoreCLR*: The comprehensive execution engine originally born from .NET Framework. Its source code lives under the [src/coreclr](/src/coreclr) subtree.
+  - *Mono*: A slimmer runtime than CoreCLR, originally born open-source to bring .NET and C# support to non-Windows platforms. Due to its lightweight nature, it is less affected in terms of speed when working with the *Debug* configuration. Its source code lives under the [src/mono](/src/mono) subtree.
+
+- **CoreLib** *(also known as System.Private.CoreLib)*: The lowest level managed library. It is directly tied to the runtime, which means it must be built in the matching configuration (e.g. building a *Debug* runtime means *CoreLib* must also be built in *Debug*). The `clr` subset includes both the *Runtime* and *CoreLib* components, so you usually don't have to worry about that. There are, however, some special cases where you might need to build the components separately, as in the sketch after this list. The runtime agnostic code for this library can be found at [src/libraries/System.Private.CoreLib/src](/src/libraries/System.Private.CoreLib/src/README.md).
+
+- **Libraries**: The bulk of the DLLs providing the rest of the functionality to the runtime. The libraries can be built in their own configuration, regardless of which one the runtime is using. Their source code lives under the [src/libraries](/src/libraries) subtree.
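+As an illustration of one such special case (a sketch; `clr.corelib` and `clr.nativecorelib` are subset names you can confirm by running the build script with `-subset help`, described below), rebuilding just CoreLib without the rest of the runtime could look like this:
+
+```bash
+# Sketch: rebuild only System.Private.CoreLib (managed and crossgen'd native
+# forms) in Release, without rebuilding the rest of the runtime.
+./build.sh -subset clr.corelib+clr.nativecorelib -configuration Release
+```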
+
+## Building the Repo
+
+The main script in charge of most of the building you will want to do is `build.sh` (`build.cmd` on Windows), located at the root of the repo. This script receives as arguments the subset(s) you want to build, as well as multiple parameters to configure your build, such as the configuration, target operating system, target architecture, and so on.
+
+**NOTE:** If you plan on using Docker to work on the runtime repo, read [this doc](/docs/workflow/using-docker.md) first. It explains how to set up, as well as the images and containers to prepare you to follow the building and testing instructions in the next sections.
+
+### General Overview
+
+Running the script (`build.sh`/`build.cmd`) with no arguments will build the whole repo in the *Debug* configuration, for the OS and architecture of your machine. A typical dev workflow involves only one or two components at a time, so it is more efficient to build just those. This is done by means of the `-subset` flag. For example, for CoreCLR, it would be:
+
+```bash
+./build.sh -subset clr
+```
+
+The main subset values are:
+
+- `Clr`: The full CoreCLR runtime, which consists of the runtime itself and the CoreLib components.
+- `Libs`: All the libraries components, excluding their tests. This includes the libraries' native parts, refs, source assemblies, and their packages and test infrastructure.
+- `Packs`: The shared framework packs, archives, bundles, installers, and the framework pack tests.
+- `Host`: The .NET hosts, packages, hosting libraries, and their tests.
+- `Mono`: The Mono runtime and its CoreLib.

-The runtime repo can be built from a regular, non-administrator command prompt, from the root of the repo.

+Some subsets are subsequently divided into smaller pieces, giving you more flexibility as to what to build/rebuild depending on what you're working on. For a full list of all the supported subsets, run the build script, passing `help` as the argument to the `subset` flag, as shown in the example after the next code block.

-The repository currently consists of three different major parts:

+It is also possible to build more than one subset with a single command. To do this, link the subsets together with a `+` sign in the value you pass to `-subset`. For example, to build both CoreCLR and the libraries in the Release configuration, the command-line would look like this:

-* The Runtimes
-* The Libraries
-* The Installer

+```bash
+./build.sh -subset clr+libs -configuration Release
+```

-More info on this, as well as the different build configurations in the [Configurations and Subsets section](#configurations-and-subsets).

-This was a concise introduction and now it's time to show the specifics of building specific subsets in any given supported platform, since most likely you will want to customize your builds according to what component(s) you're working on, as well as how you configured your build environment. We have links to instructions depending on your needs [in this section](#full-instructions-on-building-and-testing-the-runtime-repo).
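+The subset listing mentioned above looks like this:
+
+```bash
+# Prints the full list of available subsets, with a description of each.
+./build.sh -subset help
+```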
+If you need to use different configurations for different subsets, there are some specific flags you can use:
+
+- `-runtimeConfiguration (-rc)`: The CoreCLR build configuration
+- `-librariesConfiguration (-lc)`: The Libraries build configuration
+- `-hostConfiguration (-hc)`: The Host build configuration

-* For instructions on how to edit code and make changes, see [Editing and Debugging](editing-and-debugging.md).
-* For instructions on how to debug CoreCLR, see [Debugging CoreCLR](/docs/workflow/debugging/coreclr/debugging-runtime.md).
-* For instructions on using GitHub Codespaces, see [Codespaces](/docs/workflow/Codespaces.md).

+The general configuration flag `-c` affects all subsets that have not been qualified with a more specific flag, as well as subsets that don't have a specific flag of their own, like `packs`. For example, the following command-line would build the libraries in *Release* mode and the runtime in *Debug* mode:

-## Configurations and Subsets

+```bash
+./build.sh -subset clr+libs -configuration Release -runtimeConfiguration Debug
+```

-You may need to build the tree in a combination of configurations. This section explains why.

+In this example, the `-lc` flag was not specified, so `-c` qualifies `libs`. In the first example, only `-c` was passed, so it qualified both `clr` and `libs`.

-A quick reminder of some concepts -- see the [glossary](/docs/project/glossary.md) for more on these:

+As an extra note, if the subsets are the first argument you pass to the build script, you can omit the `-subset` flag altogether. Additionally, several of the supported flags also have a shorthand version (e.g. `-c` for `-configuration`). Run the script with `-h` or `-help` to get an extensive overview of all the supported flags to customize your build, including their shorthand forms, as well as a wider variety of examples.

-* **Debug Configuration** -- Non-optimized code. Asserts are enabled.
-* **Checked Configuration** -- Optimized code. Asserts are enabled. _Only relevant to CoreCLR runtime._
-* **Release Configuration** -- Optimized code. Asserts are disabled. Runs at the best speed, and suitable for performance profiling. This will impact the debugging experience due to compiler optimizations that make understanding what the debugging is showing difficult to reason about, relative to the source code.

+**NOTE:** On non-Windows systems, the longhand versions of the flags can be passed with either single `-` or double `--` dashes.

-When we talk about mixing configurations, we're discussing the following sub-components:

+### Get Started on your Platform and Components

-* **Runtime** is the execution engine for managed code and there are two different implementations available. Both are written in C/C++, therefore, easier to debug when built in a Debug configuration.
-  * CoreCLR is the comprehensive execution engine which, if built in Debug Configuration, executes managed code very slowly. For example, it will take a long time to run the managed code unit tests. The code lives under [src/coreclr](/src/coreclr).
-  * Mono is a portable and also slimmer runtime and it's not that sensitive to Debug Configuration for running managed code. You will still need to build it without optimizations to have good runtime debugging experience though. The code lives under [src/mono](/src/mono).
-* **CoreLib** (also known as System.Private.CoreLib) is the lowest level managed library.
It has a special relationship to the runtimes and therefore it must be built in the matching configuration, e.g., if the runtime you are using was built in a Debug configuration, this must be in a Debug configuration. The runtime agnostic code for this library can be found at [src/libraries/System.Private.CoreLib/src](/src/libraries/System.Private.CoreLib/src/README.md).
-* **Libraries** is the bulk of the dlls that are oblivious to the configuration that runtimes and CoreLib were built in. They are most debuggable when built in a Debug configuration, and happily, they still run sufficiently fast in that configuration that it's acceptable for development work. The code lives under [src/libraries](/src/libraries).

+Now that you've got the general idea of how to get started, it is important to mention that, while the procedure is very similar across platforms and subsets, each component has its own technicalities and details, as explained in its own specific docs:

-To build just one part of the repo, you add the `-subset` flag with the subset you wish to build to the root build script _(build.cmd/sh)_. You can specify more than one by linking them with the `+` operator (e.g. `-subset clr+libs` would build CoreCLR and the libraries). Note that if the subset is the first argument you pass to the script, you can omit the `--subset` flag altogether.

+**Component Specifics:**

-### What does this mean for me?

+- [CoreCLR](/docs/workflow/building/coreclr/README.md)
+- [Libraries](/docs/workflow/building/libraries/README.md)
+- [Mono](/docs/workflow/building/mono/README.md)

-At this point you probably know what you are planning to work on primarily: the runtimes or libraries. As general suggestions on how to proceed, here are some ideas:

+**NOTE:** *NativeAOT* is part of CoreCLR, but it has its own specifics when it comes to building. We have a separate doc dedicated to it [over here](/docs/workflow/building/coreclr/nativeaot.md).

-* If you're working in runtimes, you may want to build everything in the Debug configuration, depending on how comfortable you are debugging optimized native code.
-* If you're working in libraries, you will want to use debug libraries with a release version of runtime and CoreLib, because the tests will run faster.
-* If you're working in CoreLib - you probably want to try to get the job done with release runtime and CoreLib, and fall back to debug if you need to. The [Building Libraries](/docs/workflow/building/libraries/README.md) document explains how you'll do this.

+### General Recommendations

-## Full Instructions on Building and Testing the Runtime Repo

+- If you're working with the runtimes, then the usual recommendation is to build everything in *Debug* mode. That said, if you know you won't be debugging the libraries source code but will need them (e.g. for a *Core_Root* build), then building the libraries in *Release* instead will provide a more productive experience.
+- The counterpart applies when you are working on the libraries: there, it is recommended to build the runtime in *Release* and the libraries in *Debug* (see the example after this list).
+- If you're working on *CoreLib*, then you probably want to try to get the job done with a *Release* runtime, and fall back to *Debug* if you need to.

-Now you know about configurations and how we use them, so now you will want to read how to build what you plan to work on. Each of these will have further specific instructions or links for whichever platform you are developing on.
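+For illustration, the libraries-focused recommendation above translates to the following invocation, using the configuration flags described earlier:
+
+```bash
+# Release runtime for speed, Debug libraries for debuggability.
+./build.sh -subset clr+libs -rc Release -lc Debug
+```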
+## Testing the Repo

-* [Building CoreCLR runtime](/docs/workflow/building/coreclr/README.md)
-* [Building Mono runtime](/docs/workflow/building/mono/README.md)
-* [Building NativeAOT runtime](/docs/workflow/building/coreclr/nativeaot.md)
-* [Building Libraries](/docs/workflow/building/libraries/README.md)

+Building the components of the repo is just part of the experience. The runtime repo also includes vast test suites you can run to ensure your changes work as expected and don't inadvertently break something else. Each component has its own methodology for running its tests, explained in its own specific docs:

-After that, here's information about how to run tests:

+- [CoreCLR](/docs/workflow/testing/coreclr/testing.md)
+  - [NativeAOT](/docs/workflow/building/coreclr/nativeaot.md#running-tests)
+- [Libraries](/docs/workflow/testing/libraries/testing.md)
+- [Mono](/docs/workflow/testing/mono/testing.md)

-* [Testing CoreCLR runtime](/docs/workflow/testing/coreclr/testing.md)
-* [Testing Mono runtime](/docs/workflow/testing/mono/testing.md)
-* [Testing NativeAOT runtime](/docs/workflow/building/coreclr/nativeaot.md#running-tests)
-* [Testing Libraries](/docs/workflow/testing/libraries/testing.md)

+### Performance Analysis

-And how to measure performance:

+Fixing bugs and adding new features aren't the only things to work on in the runtime repo. We also have to ensure performance stays as optimal as possible, and that is done through benchmarking and profiling. If you're interested in conducting this kind of analysis, the following links describe the usual workflows:

-* [Benchmarking workflow for dotnet/runtime repository](https://github.com/dotnet/performance/blob/master/docs/benchmarking-workflow-dotnet-runtime.md)
-* [Profiling workflow for dotnet/runtime repository](https://github.com/dotnet/performance/blob/master/docs/profiling-workflow-dotnet-runtime.md)
+* [Benchmarking Workflow for dotnet/runtime repository](https://github.com/dotnet/performance/blob/master/docs/benchmarking-workflow-dotnet-runtime.md)
+* [Profiling Workflow for dotnet/runtime repository](https://github.com/dotnet/performance/blob/master/docs/profiling-workflow-dotnet-runtime.md)

## Warnings as Errors

-The repo build treats warnings as errors. Dealing with warnings when you're in the middle of making changes can be annoying (e.g. unused variable that you plan to use later). To disable treating warnings as errors, set the `TreatWarningsAsErrors` environment variable to `false` before building. This variable will be respected by both the `build.sh`/`build.cmd` root build scripts and builds done with `dotnet build` or Visual Studio. Some people may prefer setting this environment variable globally in their machine settings.
+The repo build treats warnings as errors, including many code-style warnings. Dealing with warnings when you're in the middle of making changes can be annoying (e.g. an unused variable that you plan to use later). To disable treating warnings as errors, set the `TreatWarningsAsErrors` environment variable to `false` before building, as shown in the example below. This variable will be respected by both the `build.sh`/`build.cmd` root build scripts and builds done with `dotnet build` or Visual Studio. Some people may prefer setting this environment variable globally in their machine settings.
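+For example, to turn this off for your current terminal session only (shown for bash; use `set` in cmd or `$env:` in PowerShell):
+
+```bash
+# Subsequent builds in this shell report warnings without failing the build.
+export TreatWarningsAsErrors=false
+./build.sh -subset libs
+```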
## Submitting a PR

-Before submitting a PR, make sure to review the [contribution guidelines](../../CONTRIBUTING.md). After you get familiarized with them, please read the [PR guide](ci/pr-guide.md) to find more information about tips and conventions around creating a PR, getting it reviewed, and understanding the CI results.
+Before submitting a PR, make sure to review the [contribution guidelines](/CONTRIBUTING.md). After you familiarize yourself with them, please read the [PR guide](/docs/workflow/ci/pr-guide.md) to find more information about tips and conventions around creating a PR, getting it reviewed, and understanding the CI results.

-## Triaging errors in CI
+## Triaging Errors in CI

-Given the size of the runtime repository, flaky tests are expected to some degree. There are a few mechanisms we use to help with the discoverability of widely impacting issues. We also have a regular procedure that ensures issues get properly tracked and prioritized. You can find more information on [triaging failures in CI](ci/failure-analysis.md).
+Given the size of the runtime repository, flaky tests are expected to some degree. There are a few mechanisms we use to help with the discoverability of widely impacting issues. We also have a regular procedure that ensures issues get properly tracked and prioritized. You can find more information on [triaging failures in CI](/docs/workflow/ci/failure-analysis.md).
diff --git a/docs/workflow/building/coreclr/README.md b/docs/workflow/building/coreclr/README.md index fbc1eab77c728..fc1b7aec22567 100644 --- a/docs/workflow/building/coreclr/README.md +++ b/docs/workflow/building/coreclr/README.md @@ -1,100 +1,156 @@ -# Building CoreCLR +# Building CoreCLR Guide

-* [Introduction](#introduction)
-* [Common Building Options](#common-building-options)
-  * [Build Drivers](#build-drivers)
-  * [Extra Flags](#extra-flags)
-  * [Build Results Layout](#build-results-layout)
-* [Platform-Specific Instructions](#platform-specific-instructions)
-* [Testing CoreCLR](#testing-coreclr)
+- [The Basics](#the-basics)
+  - [Build Results](#build-results)
+  - [What to do with the Build](#what-to-do-with-the-build)
+    - [The Core_Root for Testing Your Build](#the-core_root-for-testing-your-build)
+    - [The Dev Shipping Packs](#the-dev-shipping-packs)
+  - [Cross Compilation](#cross-compilation)
+- [Other Features](#other-features)
+  - [Build Drivers](#build-drivers)
+  - [Extra Flags](#extra-flags)
+  - [Native ARM64 Building on Windows](#native-arm64-building-on-windows)
+  - [Debugging Information for macOS](#debugging-information-for-macos)
+  - [Native Sanitizers](#native-sanitizers)

-## Introduction
+First, make sure you've prepared your environment and installed all the requirements for your platform. If not, follow this [link](/docs/workflow/README.md#introduction) for the corresponding instructions.

-Here is a brief overview on how to build the common form of CoreCLR in general. For further specific instructions on each platform, we have links to instructions later on in [Platform-Specific Instructions](#platform-specific-instructions).
+## The Basics

-To build just CoreCLR, use the `subset` flag to the `build.sh` or `build.cmd` script at the repo root. Note that specifying `-subset` explicitly is not necessary if it is the first argument (i.e. `./build.sh --subset clr` and `./build.sh clr` are equivalent). However, if you specify any other argument beforehand, then you must specify the `-subset` flag.
-
-For Linux and macOS:
+As explained in the main workflow [*README*](/docs/workflow/README.md), you can build the CoreCLR runtime by passing `-subset clr` as an argument to the repo's main `build.sh`/`build.cmd` script:

```bash
-./build.sh --subset clr
+./build.sh -subset clr
```

-For Windows:
+By default, the script builds the _clr_ in the *Debug* configuration, which doesn't have any optimizations and has all assertions enabled. If you're aiming to run performance benchmarks, make sure you select the *Release* version with `-configuration Release`, as that one generates the most optimized code. On the other hand, if your goal is to run tests, then you can get the most out of CoreCLR's exclusive *Checked* configuration. This one retains the assertions but has the native compiler optimizations enabled, making it run faster than *Debug*. This is the usual mode used for running tests in the CI pipelines.

-```cmd
-.\build.cmd -subset clr
-```
+### Build Results

-## Common Building Options
+Once the `clr` build completes, the main generated artifacts are placed in `artifacts/bin/coreclr/<os>.<arch>.<configuration>`. For example, for a Linux x64 Release build, the output path would be `artifacts/bin/coreclr/linux.x64.Release`. Here, you will find a number of different binaries, of which the most important are the following:

-By default, the script generates a _Debug_ build type, which is not optimized code and includes asserts. As its name suggests, this makes it easier and friendlier to debug the code. If you want to make performance measurements, you ought to build the _Release_ version instead, which doesn't have any asserts and has all code optimizations enabled. Likewise, if you plan on running tests, the _Release_ configuration is more suitable since it's considerably faster than the _Debug_ one. For this, you add the flag `-configuration release` (or `-c release`). For example:
+- `corerun`: The command-line host executable. This program loads and starts the CoreCLR runtime and receives the managed program you want to run as an argument (e.g. `./corerun program.dll`). On Windows, it is called `corerun.exe`.
+- `coreclr`: The CoreCLR runtime itself. On Windows, it's called `coreclr.dll`, on macOS it is `libcoreclr.dylib`, and on Linux it is `libcoreclr.so`.
+- `System.Private.CoreLib.dll`: The core managed library, containing the definitions of `Object` and the base functionality.

-```bash
-./build.sh --subset clr --configuration release
-```
+All the generated logs are placed under `artifacts/log`, and all the intermediate output the build uses is placed in the `artifacts/obj/coreclr` directory.

-As mentioned before in the [general building document](/docs/workflow/README.md#configurations-and-subsets), CoreCLR also supports a _Checked_ build type which has asserts enabled like _Debug_, but is built with the native compiler optimizer enabled, so it runs much faster. This is the usual mode used for running tests in the CI system.
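+As a quick smoke test of these binaries (a sketch; `hello.dll` stands in for any small managed app you have compiled yourself), you can point `corerun` at a managed assembly directly from the output directory. Keep in mind that apps needing more than CoreLib require the libraries laid out next to `corerun`, which is exactly what the *Core_Root* described below provides:
+
+```bash
+# Sketch: run a managed program on the freshly built runtime bits.
+cd artifacts/bin/coreclr/linux.x64.Debug
+./corerun hello.dll
+```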
+### What to do with the Build

-Now, it is also possible to select a different configuration for each subset when building them together. The `--configuration` flag applies universally to all subsets, but it can be overridden with any one or more of the following ones:
+*CoreCLR* is one of the most important components of the runtime repo, as it is one of the main engines of the .NET product. That said, while you can test and use it on its own, it is easiest to do so in conjunction with the *Libraries* subset. When you build both subsets, you can get access to the *Core_Root*. This includes all the libraries and the CLR, alongside other tools like *Crossgen2*, *R2RDump*, and the *ILC* compiler, and the main command-line host executable `corerun`, all bundled together. The *Core_Root* is one of the most reliable ways of testing changes to the runtime and running external apps with your build, and it is the way CLR tests are run in the CI pipelines.

-* `--runtimeConfiguration (-rc)`: Flag for the CLR build configuration.
-* `--librariesConfiguration (-lc)`: Flag for the libraries build configuration.
-* `--hostConfiguration (-hc)`: Flag for the host build configuration.
+#### The Core_Root for Testing Your Build

-For example, a very common scenario used by developers and the repo's test scripts with default options, is to build the _clr_ in _Debug_ mode, and the _libraries_ in _Release_ mode. To achieve this, the command-line would look like the following:
+As described in the [workflow README](/docs/workflow/README.md#building-the-repo), you can build multiple subsets by concatenating them with a `+` sign in the `-subset` argument. To prepare to build the *Core_Root*, we need to build the libraries and CoreCLR, so the `-subset` argument would be `clr+libs`. Usually, the recommended workflow is to build the clr in the *Debug* configuration and the libraries in *Release*:

```bash
-./build.sh --subset clr+libs --configuration Release --runtimeConfiguration Debug
+./build.sh -subset clr+libs -runtimeConfiguration Debug -librariesConfiguration Release
```

-Or alternatively:
+Once you have both subsets built, you can generate the *Core_Root*, which as mentioned above, is the most flexible way of testing your changes. You can generate the *Core_Root* by running the following command, shown here for a *Checked* clr build on an x64 machine (adjust the flags to match the configuration and architecture you built):

```bash
-./build.sh --subset clr+libs --librariesConfiguration Release --runtimeConfiguration Debug
+./src/tests/build.sh -x64 -checked -generatelayoutonly
```

-For more information about all the different options available, supply the argument `-help|-h` when invoking the build script. On Unix-like systems, non-abbreviated arguments can be passed in with a single `-` or double hyphen `--`.
+Since this is more related to testing, you can find the full details and instructions in the CoreCLR testing doc [over here](/docs/workflow/testing/coreclr/testing.md).

-### Build Drivers
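+To give an idea of what using the generated layout looks like (a sketch; the `linux.x64.Checked` path segment depends on your OS, architecture, and configuration, and `hello.dll` is a placeholder for your own app):
+
+```bash
+# Sketch: run a managed app against the generated Core_Root layout.
+export CORE_ROOT=$(pwd)/artifacts/tests/coreclr/linux.x64.Checked/Tests/Core_Root
+$CORE_ROOT/corerun hello.dll
+```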
+#### The Dev Shipping Packs

-If you want to use _Ninja_ to drive the native build instead of _Make_ on non-Windows platforms, you can pass the `-ninja` flag to the build script as follows:
+It is also possible to generate the full runtime NuGet packages and installer that you can use to test in a more production-like scenario. To generate these shipping artifacts, you have to build the `clr`, `libs`, `host`, and `packs` subsets:

```bash
-./build.sh --subset clr --ninja
+./build.sh -subset clr+libs+host+packs -configuration Release
```

-If you want to use Visual Studio's _MSBuild_ to drive the native build on Windows, you can pass the `-msbuild` flag to the build script similarly to the `-ninja` flag.
+The shipping artifacts are placed in the `artifacts/packages/<configuration>/Shipping` directory. Here, you will find several NuGet packages, as well as their respective symbols packages, generated from your build. More importantly, you will find a zipped archive with the full contents of the runtime, organized in the same layout as they are in the official dotnet installations. This archive includes the following files:
+
+- `host/fxr/<version>-dev/hostfxr` (`hostfxr` is named differently depending on the platform: `hostfxr.dll` on Windows, `libhostfxr.dylib` on macOS, and `libhostfxr.so` on Linux)
+- `shared/Microsoft.NETCore.App/<version>-dev/*` (The `*` here refers to all the libraries' DLLs, as well as all the binaries necessary for the runtime to function)
+- `dotnet` (`dotnet.exe` on Windows) (The main `dotnet` executable you usually use to run your apps)
+
+Note that this package only includes the runtime, so you will only be able to run apps, not build them. For that, you would need the full SDK.
+
+**NOTE:** On Windows, this will also include `.exe` and `.msi` installers, which you can use in case you want to test your build machine-wide. This is the closest you can get to an official build installation.
+
+For a full guide on using the shipping packages for testing, check out the doc we have dedicated to it [over here](/docs/workflow/testing/using-dev-shipping-packages.md).

-We recommend using _Ninja_ for building the project on Windows since it more efficiently uses the build machine's resources for the native runtime build in comparison to Visual Studio's _MSBuild_.
+### Cross Compilation
+
+Using an x64 machine, it is possible to generate builds for other architectures. Not all architectures are supported for cross-compilation, however, and support also depends on the OS you are building on and targeting. Refer to the table below for the compatibility matrix.
+
+| Operating System | To x86 | To Arm32 | To Arm64 |
+| :--------------: | :----: | :------: | :------: |
+| Windows          | ✔      |          | ✔        |
+| macOS            |        |          | ✔        |
+| Linux            |        | ✔        | ✔        |
+
+**NOTE:** On macOS, it is also possible to cross-compile from ARM64 to x64 using an Apple Silicon Mac.
+
+Detailed instructions on how to do cross-compilation can be found in the cross-building doc [over here](/docs/workflow/building/coreclr/cross-building.md).
+
+## Other Features
+
+### Build Drivers
+
+By default, the CoreCLR build uses *Ninja* as the native build driver on Windows, and *Make* on non-Windows platforms. You can override this behavior by passing the appropriate flags to the build script.
+
+To use Visual Studio's *MSBuild* instead of *Ninja* on Windows:
+
+```cmd
+./build.cmd -subset clr -msbuild
+```
+
+It is recommended to use *Ninja* on Windows, as it uses the build machine's resources more efficiently than Visual Studio's *MSBuild*.
+
+To use *Ninja* instead of *Make* on non-Windows:
+
+```bash
+./build.sh -subset clr -ninja
+```

### Extra Flags

-To pass extra compiler/linker flags to the coreclr build, set the environment variables `EXTRA_CFLAGS`, `EXTRA_CXXFLAGS` and `EXTRA_LDFLAGS` as needed. Don't set `CFLAGS`/`CXXFLAGS`/`LDFLAGS` directly as that might lead to configure-time tests failing.
+You can also pass some extra compiler/linker flags to the CoreCLR build. Set the `EXTRA_CFLAGS`, `EXTRA_CXXFLAGS`, and `EXTRA_LDFLAGS` environment variables as you see fit for this purpose, as in the example below. The build script will consume them and then set the environment variables that will ultimately affect your build (i.e. those same ones without the `EXTRA_` prefix). Don't set the final ones directly yourself, as that is known to lead to potential failures in configure-time tests.
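+For instance, a hypothetical invocation that preserves frame pointers in the native code might look like this (the specific compiler flags here are purely illustrative):
+
+```bash
+# The build script picks these up and forwards them to the native build
+# as the corresponding CFLAGS/CXXFLAGS.
+EXTRA_CFLAGS="-fno-omit-frame-pointer" \
+EXTRA_CXXFLAGS="-fno-omit-frame-pointer" \
+./build.sh -subset clr
+```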
-### Build Results Layout +### Native ARM64 Building on Windows -Once the build has concluded, it will have produced its output artifacts in the following structure: +Currently, the runtime repo supports building CoreCLR directly on Windows ARM64 without the need to cross-compile, albeit it is still in an experimental phase. To do this, you need to install the ARM64 build tools and Windows SDK for Visual Studio, in addition to all the requirements outlined in the [Windows Requirements doc](/docs/workflow/requirements/windows-requirements.md). -* Product binaries will be dropped in `artifacts\bin\coreclr\..` folder. -* A NuGet package, _Microsoft.Dotnet.CoreCLR_, will be created under `artifacts\bin\coreclr\..\.nuget` folder. -* Test binaries (if built) will be dropped under `artifacts\tests\coreclr\..` folder. However, remember the root build script will not build the tests. The instructions for working with tests (building and running) are [in the testing doc](/docs/workflow/testing/coreclr/testing.md). -* The build places logs in `artifacts\log` and these are useful when the build fails. -* The build places all of its intermediate output in the `artifacts\obj\coreclr` directory. +Once those requirements are fulfilled, you have to tell the build script to compile for Arm64 using *MSBuild*. *Ninja* is not yet supported on Arm64 platforms: -If you want to force a full rebuild of the subsets you specified when calling the build script, pass the `-rebuild` flag to it, in addition to any other arguments you might require. +```cmd +./build.cmd -subset clr -arch arm64 -msbuild +``` -## Platform-Specific Instructions +While this is functional at the time of writing this doc, it is still recommended to cross-compile from an x64 machine, as that's the most stable and tested method. -Now that you've got the general idea on how the _CoreCLR_ builds work, here are some further documentation links on platform-specific caveats and features. +### Debugging Information for macOS -* [Build CoreCLR on Windows](windows-instructions.md) -* [Build CoreCLR on macOS](macos-instructions.md) -* [Build CoreCLR on Linux](linux-instructions.md) -* [Build CoreCLR on FreeBSD](freebsd-instructions.md) +When building on macOS, the build process puts native component symbol and debugging information into `.dwarf` files, one for each built binary. This is not the native format used by macOS, and debuggers like LLDB can't automatically find them. The format macOS uses is `.dSYM` bundles. To generate them and get a better inner-loop developer experience (e.g. have the LLDB debugger automatically find program symbols and display source code lines, etc.), make sure to enable the `DCLR_CMAKE_APPLE_DYSM` flag when calling the build script: -We also have specific instructions for building _NativeAOT_ [here](/docs/workflow/building/coreclr/nativeaot.md). +```bash +./build.sh -subset clr -cmakeargs "-DCLR_CMAKE_APPLE_DYSM=TRUE" +``` + +**NOTE:** Converting the entire build process to build and package `.dSYM` bundles on macOS by default is on the table and tracked by issue #92911 [over here](https://github.com/dotnet/runtime/issues/92911). + +### Native Sanitizers + +CoreCLR is also in the process of supporting the use of native sanitizers during the build to help catch memory safety issues. To apply them, add the `-fsanitize` flag followed by the name of the sanitizer as argument. 
As of now, these are the supported sanitizers with plans of adding more in the future: + +- Sanitizer Name: `AddressSanitizer` -## Testing CoreCLR + Argument to `-fsanitize`: `address` -For testing your build, the [testing docs](/docs/workflow/testing/coreclr/testing.md) have detailed instructions on how to do it. +| Platform | Support Status | +| :------: | :---------------------: | +| Windows | Regularly Tested on x64 | +| macOS | Regularly Tested on x64 | +| Linux | Regularly Tested on x64 | + +And to use it, the command would look as follows: + +```bash +./build.sh -subset clr -fsanitize address +``` diff --git a/docs/workflow/building/coreclr/linux-instructions.md b/docs/workflow/building/coreclr/linux-instructions.md deleted file mode 100644 index 5fd493c7b6b47..0000000000000 --- a/docs/workflow/building/coreclr/linux-instructions.md +++ /dev/null @@ -1,133 +0,0 @@ -# Build CoreCLR on Linux - -* [Build using Docker](#build-using-docker) - * [Docker Images](#docker-images) -* [Build using your own Environment](#build-using-your-own-environment) - * [Set the maximum number of file-handles](#set-the-maximum-number-of-file-handles) -* [Build the Runtime](#build-the-runtime) - * [Cross-Compilation](#cross-compilation) -* [Create the Core_Root](#create-the-core_root) - -This guide will walk you through building CoreCLR on Linux. - -As mentioned in the [Linux requirements doc](/docs/workflow/requirements/linux-requirements.md), there are two options to build CoreCLR on Linux: - -* Build using Docker. -* Build using your own environment. - -## Build using Docker - -Building using Docker will require that you choose the correct image for your environment. - -Note that the OS is strictly speaking not important. For example if you are on Ubuntu 20.04 and build using the Ubuntu 18.04 x64 image there should be no issues. You can even use Linux images on a Windows OS if you have [WSL](https://learn.microsoft.com/windows/wsl/about) enabled. However, note that you can't run multiple OS's on the same _Docker Daemon_, as it takes resources from the underlying kernel as needed. In other words, you can run either Linux on WSL, or Windows containers. You have to switch between them if you need both, and restart Docker. - -The target architecture is more important, as building arm32 using the x64 image will not work. There will be missing _rootfs_ components required by the build. See [Docker Images](#docker-images) below, for more information on choosing an image to build with. - -**NOTE**: The image's architecture has to match your machine's supported platforms. For example, you can't run arm32 images on an x64 machine. But you could run x64 and arm64 images on an M1 Mac, for example. This is thanks to the _Rosetta_ emulator that Apple Silicon provides. Same case applies to running x86 on an x64 Windows machine thanks to Windows' _SYSWOW64_. Likewise, you can run Linux arm32 images on a Linux arm64 host. - -Please note that choosing the same image as the host OS you are running on will allow you to run the product/tests outside of the docker container you built in. - -Once you have chosen an image, the build is one command run from the root of the runtime repository: - -```bash -docker run --rm \ - -v :/runtime \ - -w /runtime \ - mcr.microsoft.com/dotnet-buildtools/prereqs:ubuntu-22.04 \ - ./build.sh --subset clr -``` - -Dissecting the command: - -* `--rm`: Erase the created container after use. -* `-v :/runtime`: Mount the runtime repository under `/runtime`. 
Replace `` with the full path to your `runtime` repo clone, e.g., `-v /home/user/runtime:/runtime`. -* `-w: /runtime`: Set /runtime as working directory for the container. -* `mcr.microsoft.com/dotnet-buildtools/prereqs:centos-7-20210714125435-9b5bbc2`: Docker image name. -* `./build.sh`: Command to be run in the container: run the root build command. -* `-subset clr`: Build the clr subset (excluding libraries and installers). - -To do cross-building using Docker, you need to use either specific images designated for this purpose, or configure your own. Detailed information on this can be found in the [cross-building doc](/docs/workflow/building/coreclr/cross-building.md#cross-building-using-docker). Note that the official build images are all cross-build images, even when targeting the same architecture as the host image. This is because they target versions of glibc or musl libc that are included in the cross-build rootfs, and not the host OS. - -### Docker Images - -The images used for our official builds can be found in [the pipeline resources](/eng/pipelines/common/templates/pipeline-with-resources.yml) of our Azure DevOps builds under the `container` key of the platform you plan to build. Our build infrastructure will automatically use the latest version of the image. - -| Host OS | Target OS | Target Arch | Image | crossrootfs dir | -| --------------------- | ------------ | --------------- | -------------------------------------------------------------------------------------- | -------------------- | -| Azure Linux (x64) | Alpine 3.13 | x64 | `mcr.microsoft.com/dotnet-buildtools/prereqs:azurelinux-3.0-net9.0-cross-amd64-alpine` | `/crossrootfs/x64` | -| Azure Linux (x64) | Ubuntu 16.04 | x64 | `mcr.microsoft.com/dotnet-buildtools/prereqs:azurelinux-3.0-net9.0-cross-amd64` | `/crossrootfs/x64` | -| Azure Linux (x64) | Alpine | arm32 (armhf) | `mcr.microsoft.com/dotnet-buildtools/prereqs:azurelinux-3.0-net9.0-cross-arm-alpine` | `/crossrootfs/arm` | -| Azure Linux (x64) | Ubuntu 16.04 | arm32 (armhf) | `mcr.microsoft.com/dotnet-buildtools/prereqs:azurelinux-3.0-net9.0-cross-arm` | `/crossrootfs/arm` | -| Azure Linux (x64) | Alpine | arm64 (arm64v8) | `mcr.microsoft.com/dotnet-buildtools/prereqs:azurelinux-3.0-net9.0-cross-arm64-alpine` | `/crossrootfs/arm64` | -| Azure Linux (x64) | Ubuntu 16.04 | arm64 (arm64v8) | `mcr.microsoft.com/dotnet-buildtools/prereqs:azurelinux-3.0-net9.0-cross-arm64` | `/crossrootfs/arm64` | -| Azure Linux (x64) | Ubuntu 16.04 | x86 | `mcr.microsoft.com/dotnet-buildtools/prereqs:azurelinux-3.0-net9.0-cross-x86` | `/crossrootfs/x86` | - -Notes: - -- All official builds are cross-builds with a rootfs for the target OS, and use the clang version available on the container. -- These images are built using Dockerfiles maintained in the [dotnet-buildtools-prereqs-docker repo](https://github.com/dotnet/dotnet-buildtools-prereqs-docker). - - -The following images are used for more extended scenarios, including for community-supported builds, and may require different patterns of use. 
- -| Host OS | Target OS | Target Arch | Image | crossrootfs dir | -| --------------------- | -------------------------- | ----------------- | -------------------------------------------------------------------------------------- | ---------------------- | -| Azure Linux (x64) | Android Bionic | x64 | `mcr.microsoft.com/dotnet-buildtools/prereqs:azurelinux-3.0-net9.0-cross-android-amd64`| | -| Azure Linux (x64) | Android Bionic (w/OpenSSL) | x64 | `mcr.microsoft.com/dotnet-buildtools/prereqs:azurelinux-3.0-net9.0-android-openssl` | | -| Azure Linux (x64) | Android Bionic (w/Docker) | x64 | `mcr.microsoft.com/dotnet-buildtools/prereqs:azurelinux-3.0-net9.0-android-docker` | | -| Azure Linux (x64) | Azure Linux 3.0 | x64 | `mcr.microsoft.com/dotnet-buildtools/prereqs:azurelinux-3.0-net9.0-fpm` | | -| Azure Linux (x64) | FreeBSD 13 | x64 | `mcr.microsoft.com/dotnet-buildtools/prereqs:azurelinux-3.0-net9.0-cross-freebsd-13` | `/crossrootfs/x64` | -| Azure Linux (x64) | Ubuntu 18.04 | PPC64le | `mcr.microsoft.com/dotnet-buildtools/prereqs:azurelinux-3.0-net9.0-cross-ppc64le` | `/crossrootfs/ppc64le` | -| Azure Linux (x64) | Ubuntu 24.04 | RISC-V | `mcr.microsoft.com/dotnet-buildtools/prereqs:azurelinux-3.0-net9.0-cross-riscv64` | `/crossrootfs/riscv64` | -| Azure Linux (x64) | Ubuntu 18.04 | S390x | `mcr.microsoft.com/dotnet-buildtools/prereqs:azurelinux-3.0-net9.0-cross-s390x` | `/crossrootfs/s390x` | -| Azure Linux (x64) | Ubuntu 16.04 (Wasm) | x64 | `mcr.microsoft.com/dotnet-buildtools/prereqs:azurelinux-3.0-net9.0-webassembly-amd64` | `/crossrootfs/x64` | -| Debian (x64) | Debian 12 | x64 | `mcr.microsoft.com/dotnet-buildtools/prereqs:debian-12-gcc14-amd64` | `/crossrootfs/armv6` | -| Ubuntu (x64) | Ubuntu 22.04 | x64 | `mcr.microsoft.com/dotnet-buildtools/prereqs:ubuntu-22.04-debpkg` | | -| Ubuntu (x64) | Tizen 9.0 | Arm32 (armel) | `mcr.microsoft.com/dotnet-buildtools/prereqs:ubuntu-22.04-cross-armel-tizen` | `/crossrootfs/armel` | -| Ubuntu (x64) | Ubuntu 20.04 | Arm32 (v6) | `mcr.microsoft.com/dotnet-buildtools/prereqs:ubuntu-20.04-cross-armv6-raspbian-10` | `/crossrootfs/armv6` | - -## Build using your own Environment - -Ensure you have all of the prerequisites installed from the [Linux Requirements](/docs/workflow/requirements/linux-requirements.md). - -### Set the maximum number of file-handles - -To ensure that your system can allocate enough file-handles for the libraries build, run the command in your terminal `sysctl fs.file-max`. If it is less than 100000, add `fs.file-max = 100000` to `/etc/sysctl.conf`, and then run `sudo sysctl -p`. - -## Build the Runtime - -To build CoreCLR on Linux, run `build.sh` while specifying the `clr` subset: - -```bash -./build.sh --subset clr -``` - -After the build is completed, there should be some files placed in `artifacts/bin/coreclr/linux..` (for example `artifacts/bin/coreclr/linux.x64.Release`). The most important binaries are the following: - -* `corerun`: The command line host. This program loads and starts the CoreCLR runtime and passes the managed program (e.g. `program.dll`) you want to run with it. -* `libcoreclr.so`: The CoreCLR runtime itself. -* `System.Private.CoreLib.dll`: The core managed library, containing definitions of `Object` and base functionality. - -### Cross-Compilation - -Just like you can use specialized Docker images, you can also do any of the supported cross-builds for ARM32 or ARM64 on your own Linux environment. 
Detailed instructions are found in the [cross-building doc](/docs/workflow/building/coreclr/cross-building.md#linux-cross-building). - -## Create the Core_Root - -The Core_Root provides one of the main ways to test your build. Full instructions on how to build it in the [CoreCLR testing doc](/docs/workflow/testing/coreclr/testing.md), and we also have a detailed guide on how to use it for your own testing in [its own dedicated doc](/docs/workflow/testing/using-corerun-and-coreroot.md). - -## Native Sanitizers - -CoreCLR can be built with native sanitizers like AddressSanitizer to help catch memory safety issues. To build the project with native sanitizers, add the `-fsanitize address` argument to the build script like the following: - -```bash -build.sh -s clr -fsanitize address -``` - -When building the repo with any native sanitizers, you should build all native components in the repo with the same set of sanitizers. - -The following sanitizers are supported for CoreCLR on Linux: - -| Sanitizer Name | `-fsanitize` argument | Support Status | -|-----------------|-----------------------|----------------| -| AddressSanitize | `address` | regularly tested on x64 | diff --git a/docs/workflow/building/coreclr/macos-instructions.md b/docs/workflow/building/coreclr/macos-instructions.md deleted file mode 100644 index 7ac0d0c6e0f85..0000000000000 --- a/docs/workflow/building/coreclr/macos-instructions.md +++ /dev/null @@ -1,60 +0,0 @@ -# Build CoreCLR on macOS - -* [Environment](#environment) -* [Build the Runtime](#build-the-runtime) - * [Cross-Compilation](#cross-compilation) -* [Create the Core_Root](#create-the-core_root) - -This guide will walk you through building CoreCLR on macOS. - -## Environment - -Ensure you have all of the prerequisites installed from the [macOS Requirements](/docs/workflow/requirements/macos-requirements.md). - -## Build the Runtime - -To build CoreCLR on macOS, run `build.sh` while specifying the `clr` subset: - -```bash -./build.sh --subset clr -``` - -After the build has completed, there should be some files placed in `artifacts/bin/coreclr/osx..` (for example `artifacts/bin/coreclr/osx.x64.Release`). The most important binaries are the following: - -* `corerun`: The command line host. This program loads and starts the CoreCLR runtime and passes the managed program (e.g. `program.dll`) you want to run with it. -* `libcoreclr.dylib`: The CoreCLR runtime itself. -* `System.Private.CoreLib.dll`: The core managed library, containing definitions of `Object` and base functionality. - -### Cross-Compilation - -It is possible to get a macOS ARM64 build using an Intel x64 Mac and vice versa, an x64 one using an M1 Mac. Instructions on how to do this are in the [cross-building doc](/docs/workflow/building/coreclr/cross-building.md#macos-cross-building). - -## Create the Core_Root - -The Core_Root provides one of the main ways to test your build. Full instructions on how to build it in the [CoreCLR testing doc](/docs/workflow/testing/coreclr/testing.md), and we also have a detailed guide on how to use it for your own testing in [its own dedicated doc](/docs/workflow/testing/using-corerun-and-coreroot.md). - -## Debugging information - -The build process puts native component symbol and debugging information into `.dwarf` files, one for each built binary. This is not the native format used by macOS, and debuggers like LLDB can't automatically find them. The native format used by macOS is `.dSYM` bundles. 
To build `.dSYM` bundles and get a better inner-loop developer experience on macOS (e.g., have the LLDB debugger automatically find program symbols and display source code lines, etc.), build as follows: - -```bash -./build.sh --subset clr --cmakeargs "-DCLR_CMAKE_APPLE_DSYM=TRUE" -``` - -(Note: converting the entire build process to build and package `.dSYM` bundles on macOS by default is tracked by [this](https://github.com/dotnet/runtime/issues/92911) issue.) - -## Native Sanitizers - -CoreCLR can be built with native sanitizers like AddressSanitizer to help catch memory safety issues. To build the project with native sanitizers, add the `-fsanitize address` argument to the build script like the following: - -```bash -build.sh -s clr -fsanitize address -``` - -When building the repo with any native sanitizers, you should build all native components in the repo with the same set of sanitizers. - -The following sanitizers are supported for CoreCLR on macOS: - -| Sanitizer Name | `-fsanitize` argument | Support Status | -|-----------------|-----------------------|----------------| -| AddressSanitize | `address` | regularly tested on x64 | diff --git a/docs/workflow/building/coreclr/windows-instructions.md b/docs/workflow/building/coreclr/windows-instructions.md deleted file mode 100644 index 3ab6b33bc0474..0000000000000 --- a/docs/workflow/building/coreclr/windows-instructions.md +++ /dev/null @@ -1,67 +0,0 @@ -# Build CoreCLR on Windows - -* [Environment](#environment) -* [Build the Runtime](#build-the-runtime) - * [Cross-Compilation](#cross-compilation) -* [Core_Root](#core_root) -* [Native ARM64 (Experimental)](#native-arm64-experimental) - -This guide will walk you through building CoreCLR on Windows. - -## Environment - -Ensure you have all of the prerequisites installed from the [Windows Requirements](/docs/workflow/requirements/windows-requirements.md). - -## Build the Runtime - -To build CoreCLR on Windows, run `build.cmd` while specifying the `clr` subset: - -```cmd -.\build.cmd -subset clr -``` - -After the build has completed, there should be some files placed in `artifacts/bin/coreclr/windows..` (for example `artifacts/bin/coreclr/windows.x64.Release`). The most important binaries are the following: - -* `corerun.exe`: The command line host. This program loads and starts the CoreCLR runtime and passes the managed program (e.g. `program.dll`) you want to run with it. -* `coreclr.dll`: The CoreCLR runtime itself. -* `System.Private.CoreLib.dll`: The core managed library, containing definitions of `Object` and base functionality. - -### Cross-Compilation - -It is possible to get Windows x86, ARM32, and ARM64 builds using an x64 machine. Instructions on how to do this are in the [cross-building doc](/docs/workflow/building/coreclr/cross-building.md#windows-cross-building). - -## Core_Root - -The Core_Root provides one of the main ways to test your build. Full instructions on how to build it in the [CoreCLR testing doc](/docs/workflow/testing/coreclr/testing.md), and we also have a detailed guide on how to use it for your own testing in [its own dedicated doc](/docs/workflow/testing/using-corerun-and-coreroot.md). - -## Native ARM64 (Experimental) - -Building natively on ARM64 requires you to have installed the appropriate ARM64 build tools and Windows SDK, as specified in the [Windows requirements doc](/docs/workflow/requirements/windows-requirements.md). 
Once those requirements are satisfied, you have to specify you are doing an Arm64 build, and explicitly tell the build script you want to use `MSBuild`. `Ninja` is not yet supported on Arm64 platforms.
-
-```cmd
-build.cmd -s clr -c Release -arch arm64 -msbuild
-```
-
-Since this is still in an experimental phase, the recommended way for building ARM64 is cross-compiling from an x64 machine. Instructions on how to do this can be found at the [cross-building doc](/docs/workflow/building/coreclr/cross-building.md#cross-compiling-for-arm32-and-arm64).
-
-## Native Sanitizers
-
-CoreCLR can be built with native sanitizers like AddressSanitizer to help catch memory safety issues. To build the project with native sanitizers, add the `-fsanitize address` argument to the build script like the following:
-
-```cmd
-build.cmd -s clr -fsanitize address
-```
-
-When building the repo with any native sanitizers, you should build all native components in the repo with the same set of sanitizers.
-
-The following sanitizers are supported for CoreCLR on Windows:
-
-| Sanitizer Name | Minimum VS Version | `-fsanitize` argument | Support Status |
-|----------------|--------------------|-----------------------|----------------|
-| AddressSanitizer | not yet released | `address` | experimental |
-
-## Using a custom compiler environment
-
-If you ever need to use a custom compiler environment for the native builds on Windows, you can set the `SkipVCEnvInit` environment variable to `1`. The build system will skip discovering Visual Studio and initializing its development environment when this flag is used. This is only required for very advanced scenarios and should be used rarely.
diff --git a/docs/workflow/requirements/linux-requirements.md b/docs/workflow/requirements/linux-requirements.md index 2aa1794ef1ac8..9eb52936d2641 100644 --- a/docs/workflow/requirements/linux-requirements.md +++ b/docs/workflow/requirements/linux-requirements.md @@ -1,111 +1,117 @@ -# Requirements to build dotnet/runtime on Linux +# Requirements to Set Up the Build Environment on Linux

-* [Docker](#docker)
-* [Environment](#environment)
-  * [Debian-based / Ubuntu](#debian-based--ubuntu)
-    * [Additional Requirements for Cross-Building](#additional-requirements-for-cross-building)
-  * [Fedora](#fedora)
-  * [Gentoo](#gentoo)
+- [Using your Linux Environment](#using-your-linux-environment)
+  - [Debian/Ubuntu](#debianubuntu)
+    - [CMake on Older Versions of Ubuntu and Debian](#cmake-on-older-versions-of-ubuntu-and-debian)
+    - [Clang for WASM](#clang-for-wasm)
+    - [Additional Tools for Cross Building](#additional-tools-for-cross-building)
+  - [Fedora](#fedora)
+  - [Gentoo](#gentoo)
+- [Using Docker](#using-docker)

-This guide will walk you through the requirements to build _dotnet/runtime_ on Linux. Before building there is environment setup that needs to happen to pull in all the dependencies required by the build.
+There are two ways to build the runtime repo on *Linux*: set up the environment on your own Linux machine, or use the Docker images that are used in the official builds. This guide covers both approaches. Using Docker allows you to leverage our existing images, which already have an environment set up, while using your own environment gives you more flexibility to have any other tools you might need at hand.

-There are two suggested ways to go about doing this. You can use the Docker images used in the official builds, or you can set up the environment yourself. The documentation will go over both ways.
Using Docker allows you to leverage our existing images which already have an environment set up, while using your own environment grants you better flexibility on having other tools at hand you might need.
+**NOTE:** If you're using WSL, then follow the instructions for the distro you have installed there.

-## Docker
+## Using your Linux Environment

-Install Docker. For further installation instructions, see [here](https://docs.docker.com/install/). Details on the images used by the official builds can be found in the [Linux building instructions doc](/docs/workflow/building/coreclr/linux-instructions.md#docker-images). All the required build tools are included in the Docker images used to do the build, so no additional setup is required.
+The following sections describe the requirements for different kinds of Linux distros. Pull Requests are welcome to add documentation regarding environments and distros currently not described here.

-## Environment
+The minimum required RAM is 1 GB (builds are known to fail on 512 MB VMs, see [dotnet/runtime#4069](https://github.com/dotnet/runtime/issues/4069)), although more is recommended, as builds can take a long time otherwise.

-Below are the requirements for toolchain setup, depending on your environment. Pull Requests are welcome to address other environments.
-
-Minimum RAM required to build is 1GB. The build is known to fail on 512 MB VMs ([dotnet/runtime#4069](https://github.com/dotnet/runtime/issues/4069)).
-
-You can use this helper script to install dependencies on some platforms:
+To get started, you can use this helper script to install dependencies on some platforms, or you can install them yourself following the instructions in the next sections. If you opt to try this script, make sure to run it with `sudo` if you don't have root privileges:

```bash
sudo eng/install-native-dependencies.sh
-# or without 'sudo' if you are root
```

-### Debian-based / Ubuntu
+Note that if you opt to use the script, it is always a good idea to manually double-check afterwards that all the dependencies were installed correctly.

-These instructions are written assuming the current Ubuntu LTS.
+### Debian/Ubuntu

-Install the following packages for the toolchain:
+These instructions are written assuming the current *Ubuntu LTS*.
+
+The packages you need to install are shown in the following list:

-* CMake 3.20 or newer
-* llvm
-* lld
-* clang (for WASM 16 or newer)
-* build-essential
-* python-is-python3
-* curl
-* git
-* lldb
-* libicu-dev
-* liblttng-ust-dev
-* libssl-dev
-* libkrb5-dev
-* zlib1g-dev
-* ninja-build (optional, enables building native code with ninja instead of make)
-
-**NOTE**: If you have an Ubuntu version older than 22.04 LTS, or Debian version older than 12, don't install `cmake` using `apt` directly. Follow the note written down below.
+- `CMake` (version 3.20 or newer)
+- `llvm`
+- `lld`
+- `Clang` (see the [Clang for WASM](#clang-for-wasm) section if you plan on doing work on *Web Assembly (Wasm)*)
+- `build-essential`
+- `python-is-python3`
+- `curl`
+- `git`
+- `lldb`
+- `libicu-dev`
+- `liblttng-ust-dev`
+- `libssl-dev`
+- `libkrb5-dev`
+- `ninja-build` (Optional. Enables building native code using `ninja` instead of `make`)
+
+**NOTE:** If you are running on *Ubuntu* older than version *22.04 LTS*, or *Debian* older than version 12, then don't install `cmake` using `apt` directly. Follow the instructions in the [CMake on Older Versions of Ubuntu and Debian](#cmake-on-older-versions-of-ubuntu-and-debian) section further down in this doc. 
```bash
sudo apt install -y cmake llvm lld clang build-essential \
  python-is-python3 curl git lldb libicu-dev liblttng-ust-dev \
-  libssl-dev libkrb5-dev zlib1g-dev ninja-build
+  libssl-dev libkrb5-dev ninja-build
```

-**NOTE**: As of now, Ubuntu's `apt` only has until CMake version 3.16.3 if you're using Ubuntu 20.04 LTS (less in older Ubuntu versions), and version 3.18.4 in Debian 11 (less in older Debian versions). This is lower than the required 3.20, which in turn makes it incompatible with the repo. For this case, we can use the `snap` package manager or the _Kitware APT feed_ to get a new enough version of CMake.
-**NOTE**: If you have Ubuntu 22.04 LTS and older and your `apt` does not have clang version 16, you can add `"deb http://apt.llvm.org/$(lsb_release -s -c)/ llvm-toolchain-$(lsb_release -s -c)-18 main"` repository to your `apt`. See how we do it for linux-based containers [here](./../../../.devcontainer/Dockerfile).
+#### CMake on Older Versions of Ubuntu and Debian
+
+As of now, Ubuntu's `apt` only offers *CMake* up to version 3.16.3 if you're using *Ubuntu 20.04 LTS* (older Ubuntu versions carry even earlier ones), and up to version 3.18.4 in *Debian 11* (likewise for older Debian versions). This is lower than the required 3.20, which makes those versions incompatible with the runtime repo. To get around this, there are two options: use the `snap` package manager, which has a more recent version of *CMake*, or use the *Kitware APT feed* directly.

-For snap:
+To use `snap`, run the following command:

```bash
sudo snap install cmake
```

-For the _Kitware APT feed_, follow its [instructions here](https://apt.kitware.com/).
+To use the *Kitware APT feed*, follow their official instructions [in this link](https://apt.kitware.com/).

-You now have all the required components.
+#### Clang for WASM

-#### Additional Requirements for Cross-Building
+As of now, *WASM* builds require `Clang` version 16 or newer (version 18 is the latest at the time of writing this doc). If you're using *Ubuntu 22.04 LTS* or older, you will have to add an additional repository to `apt` to get such a version. Run the following commands in your terminal to do this:

-If you are planning to use your Linux environment to do cross-building for other architectures (e.g. Arm32, Arm64) and/or other operating systems (e.g. Alpine, FreeBSD), you need to install these additional dependencies:
+```bash
+sudo add-apt-repository -y "deb http://apt.llvm.org/$(lsb_release -s -c)/ llvm-toolchain-$(lsb_release -s -c)-18 main"
+sudo apt update -y
+sudo apt install -y clang-18
+```
+
+You can also take a look at the Linux-based *Dockerfile* [over here](/.devcontainer/Dockerfile) for another example.
+
+#### Additional Tools for Cross Building

-* qemu
-* qemu-user-static
-* binfmt-support
-* debootstrap
+If you're planning to use your environment to cross-build for other architectures (e.g. Arm32, Arm64) and/or other operating systems (e.g. Alpine, FreeBSD), you'll need to install a few additional dependencies, listed below. It is worth mentioning that these packages are required to build the `crossrootfs`, which is what the cross-compilation is done against; they are not used to build the runtime itself.

-**NOTE**: These dependencies are used to build the `crossrootfs`, not the runtime itself.
+- `qemu`
+- `qemu-user-static`
+- `binfmt-support`
+- `debootstrap`

### Fedora

-These instructions are written assuming Fedora 40.
+These instructions are written assuming *Fedora 40*. 
Install the following packages for the toolchain:

-* cmake
-* llvm
-* lld
-* lldb
-* clang
-* python
-* curl
-* git
-* libicu-devel
-* openssl-devel
-* krb5-devel
-* zlib-devel
-* lttng-ust-devel
-* ninja-build (optional, enables building native code with ninja instead of make)
+- `cmake`
+- `llvm`
+- `lld`
+- `lldb`
+- `clang`
+- `python`
+- `curl`
+- `git`
+- `libicu-devel`
+- `openssl-devel`
+- `krb5-devel`
+- `lttng-ust-devel`
+- `ninja-build` (Optional. Enables building native code using `ninja` instead of `make`)

```bash
-sudo dnf install -y cmake llvm lld lldb clang python curl git libicu-devel openssl-devel \
-    krb5-devel zlib-devel lttng-ust-devel ninja-build
+sudo dnf install -y cmake llvm lld lldb clang python curl git \
+    libicu-devel openssl-devel krb5-devel lttng-ust-devel ninja-build
```

### Gentoo

@@ -115,3 +121,9 @@
In case you have Gentoo you can run following command:

```bash
emerge --ask clang dev-util/lttng-ust app-crypt/mit-krb5
```
+
+## Using Docker
+
+As mentioned at the beginning of this doc, the other method to build the runtime repo for Linux is to use the prebuilt Docker images that our official builds use. To run them, you first need to download and install the Docker Engine. The binaries and installation instructions can be found on the official Docker site [in this link](https://docs.docker.com/get-started/get-docker).
+
+Once you have the Docker Engine up and running, you can follow our Docker building instructions [over here](/docs/workflow/using-docker.md).
diff --git a/docs/workflow/requirements/macos-requirements.md b/docs/workflow/requirements/macos-requirements.md
index 0eae7f1d621ac..e9606b12569b8 100644
--- a/docs/workflow/requirements/macos-requirements.md
+++ b/docs/workflow/requirements/macos-requirements.md
@@ -1,36 +1,29 @@
-# Requirements to build dotnet/runtime on macOS
+# Requirements to Set Up the Build Environment on macOS

-* [Environment](#environment)
-  * [Xcode](#xcode)
-  * [Toolchain Setup](#toolchain-setup)
+- [Xcode Developer Tools](#xcode-developer-tools)
+- [Toolchain Additional Dependencies](#toolchain-additional-dependencies)

-This guide will walk you through the requirements needed to build _dotnet/runtime_ on macOS. We'll start by showing how to set up your environment from scratch.
+To build the runtime repo on *macOS*, you will need to install the *Xcode* developer tools and a few other dependencies, described in the sections below.

-## Environment
+## Xcode Developer Tools

-Here are the components you will need to install and setup to work with the repo.
+- Install *Apple Xcode* developer tools from the [Mac App Store](https://apps.apple.com/app/xcode/id497799835).
+- Configure the *Xcode* command line tools. You can do this via one of these two methods:
+  - Run Xcode, open Preferences, and on the Locations tab, change `Command Line Tools` to point to this installation of _Xcode.app_. This is usually configured by default, but it's always good to double-check.
+  - Alternatively, you can run `sudo xcode-select --switch /Applications/Xcode.app/Contents/Developer` in a terminal. This command assumes your Xcode app has the default name of `Xcode.app`. If you've renamed it, adjust the path accordingly before running the command.

-### Xcode
+## Toolchain Additional Dependencies

-* Install Apple Xcode developer tools from the [Mac App Store](https://apps.apple.com/us/app/xcode/id497799835). 
-* Configure the Xcode command line tools:
-  * Run Xcode, open Preferences, and on the Locations tab, change "Command Line Tools" to point to this installation of _Xcode.app_. This usually comes already done by default, but it's always good to ensure.
-  * Alternately, you can run `sudo xcode-select --switch /Applications/Xcode.app/Contents/Developer` in a terminal (Adjust the path if you renamed _Xcode.app_).
+To build the runtime repo, you will also need to install the following dependencies:

-### Toolchain Setup
+- `CMake` 3.20 or newer
+- `icu4c`
+- `openssl@1.1` or `openssl@3`
+- `pkg-config`
+- `python3`
+- `ninja` (Optional. An alternative to `make` for building native code)

-Building _dotnet/runtime_ depends on several tools to be installed. You can download them individually or use [Homebrew](https://brew.sh) for easier toolchain setup.
-
-Install the following packages:
-
-* CMake 3.20 or newer
-* icu4c
-* openssl@1.1 or openssl@3
-* pkg-config
-* python3
-* ninja (optional, enables building native code with ninja instead of make)
-
-You can install all the required packages above using _Homebrew_ by running this command in the repository root:
+You can install them individually, or you can opt to install *[Homebrew](https://brew.sh/)* and use the `Brewfile` provided by the repo, which takes care of everything for you. If you go this route, once you have *Homebrew* up and running on your machine, run the following command from the root of the repo to download and install all the necessary dependencies at once:

```bash
brew bundle --no-lock --file eng/Brewfile
diff --git a/docs/workflow/requirements/windows-requirements.md b/docs/workflow/requirements/windows-requirements.md
index ff875e7b9c5fd..26ba9aa187e53 100644
--- a/docs/workflow/requirements/windows-requirements.md
+++ b/docs/workflow/requirements/windows-requirements.md
@@ -1,140 +1,109 @@
-# Requirements to build dotnet/runtime on Windows
+# Requirements to Set Up the Build Environment on Windows

-* [Environment](#environment)
-  * [Enable Long Paths](#enable-long-paths)
-  * [Visual Studio](#visual-studio)
-  * [Build Tools](#build-tools)
-    * [CMake](#cmake)
-    * [Ninja](#ninja)
-    * [Python](#python)
-  * [Git](#git)
-  * [PowerShell](#powershell)
-  * [.NET SDK](#net-sdk)
-  * [Adding to the default PATH variable](#adding-to-the-default-path-variable)
+- [Tools and Configuration](#tools-and-configuration)
+  - [Git for Windows](#git-for-windows)
+  - [Enable Long Paths](#enable-long-paths)
+  - [Visual Studio](#visual-studio)
+    - [Workloads](#workloads)
+    - [Individual Development Tools](#individual-development-tools)
+  - [Powershell](#powershell)
+  - [The .NET SDK](#the-net-sdk)
+- [Setting Environment Variables on Windows](#setting-environment-variables-on-windows)

-These instructions will lead you through the requirements to build _dotnet/runtime_ on Windows.
+To build the runtime repo on *Windows*, you will need to install *Visual Studio* and certain development tools that accompany it, described in the following sections. These tools are required even if you don't plan to use the IDE itself.

-## Environment
+## Tools and Configuration

-Here are the components you will need to install and setup to work with the repo.
+### Git for Windows

-### Enable Long Paths
+- Download and install [Git for Windows](https://git-scm.com/download/win) (minimum required version is 2.22.0). 
+- By default, the installer should add `Git` to your `PATH` environment variable, or at least offer a checkbox where you can instruct it to do so. If it doesn't, or you'd prefer to set it yourself later, you can follow the instructions in the [Setting Environment Variables on Windows](#setting-environment-variables-on-windows) section of this doc.

-The runtime repository requires long paths to be enabled. Follow [the instructions provided here](https://learn.microsoft.com/windows/win32/fileio/maximum-file-path-limitation#enable-long-paths-in-windows-10-version-1607-and-later) to enable that feature.
+### Enable Long Paths

-If using Git for Windows you might need to also configure long paths there. Using an administrator terminal simply type:
+The runtime repo requires long paths to be enabled both on Windows itself and in *Git*. To configure them in *Git*, open a terminal with administrator privileges and enter the following command:

-```cmd
+```powershell
git config --system core.longpaths true
```

-### Visual Studio
-
-Install [Visual Studio 2022](https://visualstudio.microsoft.com/downloads/). The Community edition is available free of charge. Visual Studio 2022 17.8 or later is required. Note that as we ramp up on a given release the libraries code may start using preview language features. While an older IDE may still succeed in building the projects, the IDE may report mismatched diagnostics in the Errors and Warnings window. Using the latest public preview of Visual Studio is required to ensure the IDE experience is well behaved in such scenarios.
+This is necessary because *Git for Windows* is compiled with **MSYS**, which uses a version of the Windows API with a file path limit of 260 characters, as opposed to the usual limit of 4096 characters on macOS and Linux.

-Note that Visual Studio and the development tools described below are required, regardless of whether you plan to use the IDE or not. The installation process goes as follows:
+Next, to enable long paths on Windows itself, follow the instructions provided [in this link](https://learn.microsoft.com/windows/win32/fileio/maximum-file-path-limitation?tabs=registry#enable-long-paths-in-windows-10-version-1607-and-later).

-* It's recommended to use **Workloads** installation approach. The following are the minimum requirements:
-  * **.NET Desktop Development** with all default components,
-  * **Desktop Development with C++** with all default components.
-* To build for Arm64, make sure that you have the right architecture-specific compilers installed. In the **Individual components** window, in the **Compilers, build tools, and runtimes** section:
-  * For Arm64, check the box for _MSVC v143* VS 2022 C++ ARM64 build tools (Latest)_.
-* To build the tests, you will need some additional components:
-  * **C++/CLI support for v143 build tools (Latest)**.
+If long paths are not enabled, you might run into issues as early as cloning the repo. In particular, with libraries that have very long filenames, you might get errors like `Unable to create file: Filename too long` during the cloning process.

-A `.vsconfig` file is included in the root of the _dotnet/runtime_ repository that includes all components needed to build the _dotnet/runtime_ repository. You can [import `.vsconfig` in your Visual Studio installer](https://learn.microsoft.com/visualstudio/install/import-export-installation-configurations?view=vs-2022#import-a-configuration) to install all necessary components. 
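+
+If you prefer checking this from a terminal instead of the Settings app, here is one possible way to inspect and flip the Windows-side setting from an elevated *Powershell* prompt. This is a sketch based on the `LongPathsEnabled` registry value that the linked instructions describe, not a repo-provided script:
+
+```powershell
+# Check whether Windows long paths are currently enabled (1 means enabled).
+Get-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem' -Name LongPathsEnabled
+
+# Enable them (requires an administrator terminal).
+Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem' -Name LongPathsEnabled -Value 1
+```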
+### Visual Studio

-### Build Tools
+Download and install the [latest version of Visual Studio](https://visualstudio.microsoft.com/downloads/) (minimum required version is VS 2022 17.8). The **Community Edition** is available free of charge. Note that as we ramp up on a given release, the libraries code may start using preview language features. While older versions of the IDE may still succeed in building the projects, the IDE may report mismatched diagnostics in the *Errors and Warnings* window. Using the latest public preview of Visual Studio helps ensure the IDE experience is well behaved in such scenarios.

-These steps are required only in case the tools have not been installed as Visual Studio **Individual Components** (described above).
+Note that Visual Studio and its development tools are required, regardless of whether you plan to use the IDE or not.

-#### CMake
+#### Workloads

-* Install [CMake](https://cmake.org/download) for Windows.
-* Add its location (e.g. C:\Program Files (x86)\CMake\bin) to the PATH environment variable. The installation script has a check box to do this, but you can do it yourself after the fact following the instructions at [Adding to the Default PATH variable](#adding-to-the-default-path-variable).
+It is highly recommended to use the *Workloads* approach, as it installs the full bundles, which include all the tools the repo needs to work properly. Open the *Visual Studio Installer*, and click on *Modify* on the Visual Studio installation you plan to use. There, click on the *Workloads* tab (usually selected by default), and install the following bundles:

-The _dotnet/runtime_ repository requires using CMake 3.20 or newer.
+- .NET desktop development
+- Desktop development with C++

-**NOTE**: If you plan on using the `-msbuild` flag for building the repo, you will need version 3.21 at least. This is because the VS2022 generator doesn't exist in CMake until said version.
+To build the tests and do ARM32/ARM64 development, you'll need some additional components. You can find them by clicking on the *Individual components* tab in the *Visual Studio Installer*:

-#### Ninja
+- For Arm development: *MSVC v143 - VS 2022 C++ ARM64/ARM64EC build tools (Latest)* for Arm64, and *MSVC v143 - VS 2022 C++ ARM build tools (Latest)* for Arm32.
+- For building tests: *C++/CLI support for v143 build tools (Latest)*

-* Install Ninja in one of the three following ways
-  * Ninja is included with Visual Studio. ARM64 Windows should use this method as other options are currently not available for ARM64.
-  * [Download the executable](https://github.com/ninja-build/ninja/releases) and add its location to [the Default PATH variable](#adding-to-the-default-path-variable).
-  * [Install via a package manager](https://github.com/ninja-build/ninja/wiki/Pre-built-Ninja-packages), which should automatically add it to the PATH environment variable.
+Alternatively, there is also a `.vsconfig` file included at the root of the runtime repo. It lists all the required components in a JSON format that Visual Studio can read and parse. You can [import this `.vsconfig` file](https://learn.microsoft.com/visualstudio/install/import-export-installation-configurations?view=vs-2022#import-a-configuration) instead of installing the workloads yourself. 
It is worth mentioning, however, that while we are careful to keep this file up to date, it may occasionally fall behind and miss important components, so it is always a good idea to double-check that the full workloads are installed.

-#### Python
+#### Individual Development Tools

-* Install [Python](https://www.python.org/downloads/) for Windows.
-* Add its location (e.g. C:\Python*\\) to the PATH environment variable.
-  The installation script has a check box to do this, but you can do it yourself after the fact following the instructions at [Adding to the Default PATH variable](#adding-to-the-default-path-variable).
+At this point, Visual Studio should have installed all the tools you need. Some of them, however, may not have been installed, or you might prefer to install them yourself from their own sources. In that case, download their installers and follow their setup. The installers usually offer to add the tools to your `PATH` environment variable automatically. If you miss this option, or prefer to set the variable yourself later on, you can follow the instructions in the [Setting Environment Variables on Windows](#setting-environment-variables-on-windows) section of this doc.

-The _dotnet/runtime_ repository requires at least Python 3.7.4.
+Here are the links where you can download these tools:

-### Git
+- *CMake*: https://cmake.org/download (minimum required version is 3.20)
-* Install [Git](https://git-for-windows.github.io/) for Windows.
+- *Ninja*: https://github.com/ninja-build/ninja/releases (the latest version is recommended)
-* Add its location (e.g. C:\Program Files\Git\cmd) to the PATH environment variable.
-  The installation script has a check box to do this, but you can do it yourself after the fact following the instructions at [Adding to the Default PATH variable](#adding-to-the-default-path-variable).
+- *Python*: https://www.python.org/downloads/windows (minimum required version is 3.7.4)

-The _dotnet/runtime_ repository requires at least Git 2.22.0.
+**NOTE:** If you plan on using *MSBuild* instead of *Ninja* to build the native components, then the minimum required CMake version is 3.21 instead. This is because the VS2022 generator doesn't exist in CMake until said version.

-### PowerShell
+### Powershell

-* Ensure that `powershell.exe` is accessible via the PATH environment variable. Typically this is `%SYSTEMROOT%\System32\WindowsPowerShell\v1.0\` and its automatically set upon Windows installation.
-* Powershell version must be 3.0 or higher. Use `$PSVersionTable.PSVersion` to determine the engine version.
+The runtime repo also uses some `powershell` scripts as part of the Windows builds, so ensure that `powershell.exe` is accessible via your `PATH` environment variable. It is located in `%SYSTEMROOT%\System32\WindowsPowerShell\v1.0` and should have been set up when Windows was installed, but it never hurts to double-check.
+
+The minimum required version is 3.0, which your Windows installation should already have. You can verify this by checking the `$PSVersionTable.PSVersion` variable in a Powershell terminal.

-### .NET SDK
+### The .NET SDK

-While not strictly needed to build or test this repository, having the .NET SDK installed lets you browse solution files in this repository with Visual Studio and use the `dotnet.exe` command to run .NET applications in the 'normal' way. 
+While not strictly needed to build or test this repository, having the .NET SDK installed lets you browse solution files in the codebase with Visual Studio and use the `dotnet.exe` command to build and run .NET applications in the 'normal' way.

-We use this in the [build testing with the installed SDK](/docs/workflow/testing/using-your-build-with-installed-sdk.md), and [build testing with dev shipping packages](/docs/workflow/testing/using-dev-shipping-packages.md) instructions. The minimum required version of the SDK is specified in the [global.json file](https://github.com/dotnet/runtime/blob/main/global.json#L3). You can find the installers and binaries for latest development builds of .NET SDK in the [sdk repo](https://github.com/dotnet/sdk#installing-the-sdk).
+We use this in the [build testing with the installed SDK](/docs/workflow/testing/using-your-build-with-installed-sdk.md), and [build testing with dev shipping packages](/docs/workflow/testing/using-dev-shipping-packages.md) instructions. The minimum required version of the SDK is specified in the [global.json file](https://github.com/dotnet/runtime/blob/main/global.json#L3). You can find the nightly installers and binaries for the latest development builds over in the [SDK repo](https://github.com/dotnet/sdk#installing-the-sdk).

-Alternatively, to avoid modifying your machine state, you can use the repository's locally acquired SDK by passing in the solution to load via the `-vs` switch. For example:
+Alternatively, if you would rather avoid modifying your machine state, you can use the repository's locally acquired SDK by passing in the solution to load via the `-vs` switch. For example:

```cmd
.\build.cmd -vs System.Text.RegularExpressions
```

-This will set the `DOTNET_ROOT` and `PATH` environment variables to point to the locally acquired SDK under `runtime\.dotnet` and will launch the Visual Studio instance that is registered for the `sln` extension.
+This will set the `DOTNET_ROOT` and `PATH` environment variables to point to the locally acquired SDK under the `.dotnet` directory found at the root of the repo, for the duration of this terminal session. Then, it will launch the Visual Studio instance that is registered for the `.sln` extension, and open the solution you passed as an argument on the command line.

-### Adding to the default PATH variable
+## Setting Environment Variables on Windows

-The commands above need to be on your command lookup path. Some installers will automatically add them to the path as part of the installation, but if not, here is how you can do it.
+As mentioned in the sections above, the commands that run the development tools have to be in your `PATH` environment variable. Their installers usually have this option enabled by default, but if for any reason you need to set the variable yourself, here is how you can do it. There are two options: you can make the change last only for the current terminal session, or you can apply it to the system to make it permanent.

-You can also temporarily add a directory to the PATH environment variable with the command-prompt syntax `set PATH=%PATH%;DIRECTORY_TO_ADD_TO_PATH`. If you're working with Powershell, then the syntax would be `$Env:PATH += ";DIRECTORY_TO_ADD_TO_PATH"`. However, this change will only last until the command windows close. 
+**Temporary for the Duration of the Terminal Session**

-You can make your change to the PATH variable persistent by going to _Control Panel -> System And Security -> System -> Advanced system settings -> Environment Variables_, and select the `Path` variable under `System Variables` (if you want to change it for all users) or `User Variables` (if you only want to change it for the current user).
+If you're on *Command Prompt*, issue the following command:

-Simply edit the PATH variable's value and add the directory (with a semicolon separator).
-
-### Windows on Arm64
-
-The Windows on Arm64 development experience has improved substantially over the last few years, however there are still a few steps you should take to improve performance when developing dotnet/runtime on an ARM device.
-
-During preview releases, the repo sources its compilers from the [Microsoft.NET.Compilers.Toolset](https://www.nuget.org/packages/Microsoft.Net.Compilers.Toolset/) package whose bits aren't configured for the ARM64 build of .NET framework. This can result in [suboptimal performance](https://github.com/dotnet/runtime/issues/104548) when working on libraries in Visual Studio. The issue can be worked around by [configuring the registry](https://github.com/dotnet/runtime/issues/104548#issuecomment-2214581797) to run the compiler as Arm64 processes. The proper fix that will make this workaround unnecessary is being worked on in [this PR](https://github.com/dotnet/roslyn/pull/74285).
+```cmd
+set PATH=%PATH%;DIRECTORY_TO_ADD_TO_PATH
+```

-Using an Administrator Powershell prompt run the script:
+If you're on *Powershell*, then the command looks like this:

```powershell
-function SetPreferredMachineToArm64($imageName)
-{
-    $RegistryPath = "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\${imageName}"
-    $Name = "PreferredMachine"
-    $Value = [convert]::ToInt32("aa64", 16)
-
-    # Create the key if it does not exist
-    If (-NOT (Test-Path $RegistryPath)) {
-      New-Item -Path $RegistryPath -Force | Out-Null
-    }
-
-    # Now set the value
-    New-ItemProperty -Path $RegistryPath -Name $Name -Value $Value -PropertyType DWORD -Force
-}
-
-SetPreferredMachineToArm64('csc.exe')
-SetPreferredMachineToArm64('VBCSCompiler.exe')
+$Env:PATH += ";DIRECTORY_TO_ADD_TO_PATH"
```

-Then restart any open Visual Studio applications.
+**Permanently on the System**
+
+To make your environment variable changes persistent, open *Control Panel*. There, click on *System and Security* -> *System* -> *Advanced System Settings* -> *Environment Variables*. You'll notice there are two `PATH` environment variables: one under `User Variables`, and one under `System Variables`. If you want to make the changes persistent only for your current user, edit the former; if you want them to apply to all accounts on the machine, edit the latter.
diff --git a/docs/workflow/using-docker.md b/docs/workflow/using-docker.md
new file mode 100644
index 0000000000000..6f3ca0912d4a7
--- /dev/null
+++ b/docs/workflow/using-docker.md
@@ -0,0 +1,79 @@
+# Using Docker for your Workflow
+
+- [Docker Basics](#docker-basics)
+- [The Official Runtime Docker Images](#the-official-runtime-docker-images)
+- [Build the Repo](#build-the-repo)
+
+This doc will cover the usage of Docker images and containers for your builds.
+
+## Docker Basics
+
+First, you have to install and enable the Docker Engine. If you haven't done so, follow the instructions on the official Docker site [in this link](https://docs.docker.com/get-started/get-docker). 
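+
+As a quick sanity check that the engine is working, you can print the client version and run Docker's standard `hello-world` image. This is just a generic Docker smoke test, not something specific to the runtime repo:
+
+```bash
+# Print the installed Docker version.
+docker --version
+
+# Pull and run the standard smoke-test image; it prints a greeting and exits.
+docker run --rm hello-world
+```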
+
+When using Docker, your machine's OS is, strictly speaking, not that important. For example, if you are on *Ubuntu 22.04*, you can use the *Ubuntu 18.04* image without any issues whatsoever. Likewise, you can run Linux images on Windows if you have WSL enabled. If you followed the instructions from the official Docker website when installing the engine, you most likely have it already up and running. If not, you can follow the instructions in [this link](https://learn.microsoft.com/windows/wsl/install) to enable it. However, note that you can't run multiple OSes on the same *Docker Daemon*, as it takes resources from the underlying kernel as needed. In other words, you can run either Linux containers on WSL or Windows containers, and if you need both, you have to switch between them and restart Docker.
+
+The target architecture is more important to consider when using Docker containers. The image's architecture has to match one of your machine's supported platforms. For instance, you can run both x64 and Arm64 images on an *Apple Silicon Mac*, thanks to the *Rosetta* x64 emulator it provides. Likewise, you can run Linux Arm32 images on a Linux Arm64 host.
+
+Note that while Docker uses WSL to run Linux containers on Windows, you don't have to boot up a WSL terminal to run them. Any `cmd` or `powershell` terminal with the `docker` command available will suffice to run all the commands; Docker takes care of the rest.
+
+## The Official Runtime Docker Images
+
+The following tables list the fully tagged names of the images used for the official builds.
+
+**Main Docker Images**
+
+The main Docker images are the most commonly used ones, and the ones you will probably need for your builds. If you are working with more specific scenarios (e.g. Android, RISC-V), then you will find the images you need in the *Extended Docker Images* table right below this one. 
+ +| Host OS | Target OS | Target Arch | Image | crossrootfs dir | +| ----------------- | ------------ | --------------- | -------------------------------------------------------------------------------------- | -------------------- | +| Azure Linux (x64) | Alpine 3.13 | x64 | `mcr.microsoft.com/dotnet-buildtools/prereqs:azurelinux-3.0-net9.0-cross-amd64-alpine` | `/crossrootfs/x64` | +| Azure Linux (x64) | Ubuntu 16.04 | x64 | `mcr.microsoft.com/dotnet-buildtools/prereqs:azurelinux-3.0-net9.0-cross-amd64` | `/crossrootfs/x64` | +| Azure Linux (x64) | Alpine 3.13 | Arm32 (armhf) | `mcr.microsoft.com/dotnet-buildtools/prereqs:azurelinux-3.0-net9.0-cross-arm-alpine` | `/crossrootfs/arm` | +| Azure Linux (x64) | Ubuntu 22.04 | Arm32 (armhf) | `mcr.microsoft.com/dotnet-buildtools/prereqs:azurelinux-3.0-net9.0-cross-arm` | `/crossrootfs/arm` | +| Azure Linux (x64) | Alpine 3.13 | Arm64 (arm64v8) | `mcr.microsoft.com/dotnet-buildtools/prereqs:azurelinux-3.0-net9.0-cross-arm64-alpine` | `/crossrootfs/arm64` | +| Azure Linux (x64) | Ubuntu 16.04 | Arm64 (arm64v8) | `mcr.microsoft.com/dotnet-buildtools/prereqs:azurelinux-3.0-net9.0-cross-arm64` | `/crossrootfs/arm64` | +| Azure Linux (x64) | Ubuntu 16.04 | x86 | `mcr.microsoft.com/dotnet-buildtools/prereqs:azurelinux-3.0-net9.0-cross-x86` | `/crossrootfs/x86` | + +**Extended Docker Images** + +| Host OS | Target OS | Target Arch | Image | crossrootfs dir | +| ----------------- | -------------------------- | ------------- | --------------------------------------------------------------------------------------- | ---------------------- | +| Azure Linux (x64) | Android Bionic | x64 | `mcr.microsoft.com/dotnet-buildtools/prereqs:azurelinux-3.0-net9.0-cross-android-amd64` | *N/A* | +| Azure Linux (x64) | Android Bionic (w/OpenSSL) | x64 | `mcr.microsoft.com/dotnet-buildtools/prereqs:azurelinux-3.0-net9.0-android-openssl` | *N/A* | +| Azure Linux (x64) | Android Bionic (w/Docker) | x64 | `mcr.microsoft.com/dotnet-buildtools/prereqs:azurelinux-3.0-net9.0-android-docker` | *N/A* | +| Azure Linux (x64) | Azure Linux 3.0 | x64 | `mcr.microsoft.com/dotnet-buildtools/prereqs:azurelinux-3.0-net9.0-fpm` | *N/A* | +| Azure Linux (x64) | FreeBSD 13 | x64 | `mcr.microsoft.com/dotnet-buildtools/prereqs:azurelinux-3.0-net9.0-cross-freebsd-13` | `/crossrootfs/x64` | +| Azure Linux (x64) | Ubuntu 18.04 | PPC64le | `mcr.microsoft.com/dotnet-buildtools/prereqs:azurelinux-3.0-net9.0-cross-ppc64le` | `/crossrootfs/ppc64le` | +| Azure Linux (x64) | Ubuntu 24.04 | RISC-V | `mcr.microsoft.com/dotnet-buildtools/prereqs:azurelinux-3.0-net9.0-cross-riscv64` | `/crossrootfs/riscv64` | +| Azure Linux (x64) | Ubuntu 18.04 | S390x | `mcr.microsoft.com/dotnet-buildtools/prereqs:azurelinux-3.0-net9.0-cross-s390x` | `/crossrootfs/s390x` | +| Azure Linux (x64) | Ubuntu 16.04 (Wasm) | x64 | `mcr.microsoft.com/dotnet-buildtools/prereqs:azurelinux-3.0-net9.0-webassembly-amd64` | `/crossrootfs/x64` | +| Debian (x64) | Debian 12 | x64 | `mcr.microsoft.com/dotnet-buildtools/prereqs:debian-12-gcc14-amd64` | *N/A* | +| Ubuntu (x64)* | Ubuntu 22.04 | x64 | `mcr.microsoft.com/dotnet-buildtools/prereqs:ubuntu-22.04-debpkg` | *N/A* | +| Ubuntu (x64) | Tizen 9.0 | Arm32 (armel) | `mcr.microsoft.com/dotnet-buildtools/prereqs:ubuntu-22.04-cross-armel-tizen` | `/crossrootfs/armel` | +| Ubuntu (x64) | Ubuntu 20.04 | Arm32 (v6) | `mcr.microsoft.com/dotnet-buildtools/prereqs:ubuntu-20.04-cross-armv6-raspbian-10` | `/crossrootfs/armv6` | + +**NOTE:** The Ubuntu image marked with an * in the table above is 
only used for producing *deb* packages, not for building any product code.
+
+## Build the Repo
+
+Once you've chosen the image that suits your needs, you can issue `docker run` with the necessary arguments to use your clone of the runtime repo, and call the build scripts as you need. Below is a small command-line example, with each of the flags you might need explained afterwards:
+
+```bash
+docker run --rm \
+  -v <RUNTIME_REPO_PATH>:/runtime \
+  -w /runtime \
+  mcr.microsoft.com/dotnet-buildtools/prereqs:azurelinux-3.0-net9.0-cross-amd64 \
+  ./build.sh --subset clr --configuration Checked
+```
+
+Now, dissecting the command:
+
+- `--rm`: Remove the created container after it finishes running.
+- `-v <RUNTIME_REPO_PATH>:/runtime`: Mount the runtime repo clone located at `<RUNTIME_REPO_PATH>` to the container path `/runtime`.
+- `-w /runtime`: Start the container in the `/runtime` directory.
+- `mcr.microsoft.com/dotnet-buildtools/prereqs:azurelinux-3.0-net9.0-cross-amd64`: The fully qualified name of the Docker image to download. In this case, we want to use an *Azure Linux* image to target the *x64* architecture.
+- `./build.sh --subset clr --configuration Checked`: The build command to run in the repo. In this case, we want to build the `clr` subset in the *Checked* configuration.
+
+You might also want to interact with the container directly for various reasons, such as running multiple builds in different paths. In this case, instead of passing the build script command to the `docker` command-line, pass the `-it` flag. When you do this, you will get access to a small shell within the container, which allows you to explore it, run builds manually, and so on, like you would in a regular terminal on your machine. Note that the container shell's built-in tools are very limited compared to the ones you probably have on your machine, so don't expect to be able to do your full workflow there.
+
+To do cross-building using Docker, make sure to select the appropriate image that targets the platform you want to build for. As for the commands to run, follow the same instructions from the cross-building doc [over here](/docs/workflow/building/coreclr/cross-building.md), with the difference that you don't need to generate the *ROOTFS*, as the cross-building images already include it.
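+
+For illustration, here is a hypothetical cross-build invocation in the spirit of the example above, using the Arm64 cross image from the *Main Docker Images* table. Treat it as a sketch: `<RUNTIME_REPO_PATH>` is a placeholder for your local clone, the `ROOTFS_DIR` value is taken from that table's *crossrootfs dir* column, and the authoritative set of build flags is the one the cross-building doc prescribes:
+
+```bash
+docker run --rm \
+  -v <RUNTIME_REPO_PATH>:/runtime \
+  -w /runtime \
+  -e ROOTFS_DIR=/crossrootfs/arm64 \
+  mcr.microsoft.com/dotnet-buildtools/prereqs:azurelinux-3.0-net9.0-cross-arm64 \
+  ./build.sh --subset clr --arch arm64 --cross --configuration Release
+```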