While the common and already well-described monorepo scenarios end with publishing accessible packages (via lerna publish), most of our subprojects are not meant to fit the package definition; they are better characterized as separate web applications.

Our existing approach to continuous deployment could not be simpler. Multiple "release" pipelines are each linked to a separate GitHub repository and triggered by webhooks on successful merges to master. Then the build stage kicks off: Docker images stored in each repo are used to set up the build environment and execute specific scripts that produce the final bundle and make sure the output lands on a server end users can access. Nothing that falls under the term "black magic", really.

As you can probably imagine, a yarn install command must be executed somewhere within this process. That is totally intuitive and reasonable - we need to somehow fetch the external resources (listed in package.json and pinned to the specific versions recorded in yarn.lock) that the project depends on.

Sure, from now onwards, both build and deployment pipelines will be linked to a single repository, so if you think that all the build scripts will need to be modified to make sure we execute them from a specific subdirectory ( /packages/), you are right. If you also think that, apart from this small addition, we should be fine leaving the rest of the build steps unchanged, you are not entirely wrong. But if you stick to a classic yarn install after navigating to the directory containing the project the pipeline is focused on, then without complex caching the workflow will quickly become extremely inefficient. Why?

Focused installations with Yarn workspaces

First things first. I am not that mean, but what follows may sound serious: wherever you execute yarn install within the scope of a project built on top of workspaces, the dependencies for all of the workspaces will be installed.
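To make the install behavior concrete, here is a minimal sketch of the kind of workspace layout described above. The workspace names (app-a, app-b) are hypothetical, and the yarn commands are shown only as comments, since the point is the structure: one root package.json declaring the workspaces and a single shared yarn.lock.

```shell
# Hypothetical monorepo layout; "app-a" and "app-b" stand in for real workspaces.
mkdir -p monorepo/packages/app-a monorepo/packages/app-b

cat > monorepo/package.json <<'EOF'
{
  "private": true,
  "workspaces": ["packages/*"]
}
EOF

cat > monorepo/packages/app-a/package.json <<'EOF'
{ "name": "app-a", "version": "1.0.0" }
EOF

cat > monorepo/packages/app-b/package.json <<'EOF'
{ "name": "app-b", "version": "1.0.0" }
EOF

# Anywhere inside this tree, a plain install resolves the single root
# yarn.lock and installs the dependencies of BOTH app-a and app-b, even
# when the CI pipeline only cares about one of them:
#   cd monorepo/packages/app-a && yarn install
#
# Yarn 1.x (1.7+) ships a focused variant that, to my knowledge, installs
# only the current workspace's dependencies and pulls sibling workspaces
# from the registry instead:
#   cd monorepo/packages/app-a && yarn install --focus

ls monorepo/packages
```

This is why a pipeline that merely cd's into /packages/ before running a plain yarn install still pays the cost of every other workspace on each build.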