
WIP: Create pre-configured image for use in CI

Open Daniel Joseph Antrim requested to merge dantrim_update_ci_image into devel

What

This MR changes the container used in the CI from centos7 to labremote-base, whose build configuration is added in this MR's changes. The file is tentatively named Dockerfile_base since I did not want to clobber the existing Dockerfile yet (not knowing what its purpose is).

The labremote-base image contains all of the "boilerplate" installs that we currently do in the CI, the goal being to speed up the CI process and to remove clutter from the CI logs. The CI should be ready to install/build labRemote from the start, rather than first building many standard and system packages.
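To make the idea concrete, a rough sketch of what such a base image's build file could look like (illustrative only: the actual Dockerfile_base is in this MR's diff, and the package list here is an assumption):

```dockerfile
# Sketch, not the real Dockerfile_base. The point is that the boilerplate
# system packages are baked into the image, so CI jobs start out ready to
# configure and build labRemote instead of installing these every run.
FROM centos:7

RUN yum install -y epel-release && \
    yum install -y gcc-c++ make cmake3 git python3-devel && \
    yum clean all
```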

ToDo

I want to understand the complete requirements of this image. For example, right now it does not include some of the additional libraries like mpsse. It may also be good to understand in what other circumstances this labRemote-ready image may be used.


Activity

  • added CI label

  • @gstark since you created and are using the existing Dockerfile, I am wondering if you have any input on what would be good to add. Or how to configure it so that it is useful in environments other than CI.

  • Hrmm, this looks fine. I'd suggest using a setup similar to what I use in mario-mapyde:

    # Push image to latest in docker-registry
    # If tagged, CI_COMMIT_REF_SLUG is the tag name with dashes
    # IMAGE_NAME comes from the CI_JOB_NAME variable (build_xyz -> IMAGE_NAME=xyz)
    .dockerfile_changed:
      only:
        refs:
          - master
        changes:
          - Dockerfiles/*
    
    generate_build_images:
      extends: .dockerfile_changed
      stage: prebuild
      image: python:3.7-alpine
      before_script:
        - apk add git
      script:
        - python ci/generateYaml_dockerBuild.py $(git diff-tree --no-commit-id --name-only -r $CI_COMMIT_SHA --diff-filter=ACMR Dockerfiles/ | tr '\n' ' ')
                                                --output-file .build_images.yml
        - cat .build_images.yml
      artifacts:
        paths:
          - .build_images.yml
    
    build_docker_images:
      extends: .dockerfile_changed
      stage: build
      trigger:
        include:
          - artifact: .build_images.yml
            job: generate_build_images
        strategy: depend # wait for child jobs to finish

    with the script for building the yaml jobs down here

    #!/usr/bin/env python3
    import argparse
    import pathlib
    import json
    
    
    def make_job(dockerfile_path):
        job_name = dockerfile_path.stem
        print(f"Building config for build_{job_name}")
        return {
            f"build_{job_name}": {
                "image": {
                    "name": "gitlab-registry.cern.ch/ci-tools/docker-image-builder",
                    "entrypoint": [""],
                },
                "script": [
                    'echo "{\\"auths\\":{\\"$CI_REGISTRY\\":{\\"username\\":\\"$CI_REGISTRY_USER\\",\\"password\\":\\"$CI_REGISTRY_PASSWORD\\"}}}" > /kaniko/.docker/config.json',
                    f'/kaniko/executor --context "${{CI_PROJECT_DIR}}" --dockerfile "{dockerfile_path.resolve()}" --destination "${{CI_REGISTRY_IMAGE}}/{job_name}:${{CI_COMMIT_REF_SLUG}}"',
                ],
            }
        }
    
    
    if __name__ == "__main__":
        parser = argparse.ArgumentParser(description="Build CI configs for dockerfiles.")
        parser.add_argument(
            "files",
            metavar="f",
            type=pathlib.Path,
            nargs="+",
            help="paths for dockerfiles to build",
        )
        parser.add_argument("--output-file", default=None, help="Output file to write to")
        args = parser.parse_args()
    
        config = {}
        for f in args.files:
            config.update(make_job(f))
    
        if args.output_file is None:
            print(json.dumps(config, indent=4, sort_keys=True))
        else:
            with open(args.output_file, "w+") as out_file:
                json.dump(config, out_file, indent=4, sort_keys=True)

    So essentially, the idea is to build an arbitrary set of dockerfiles for a particular branch: determine which dockerfiles have actually changed (if any), and act on that. I find this workflow a lot more manageable in that docker images are only rebuilt first if their dockerfiles changed; otherwise, the latest is just pulled from the registry.

    For additional requirements, this might also be a place where we can use multi-stage builds (e.g. pinging Feickert for his input on this case might be beneficial). Multi-stage builds let you add new things on top of a common base in a single build (smaller image sizes), with different libs or binaries installed/compiled per stage.
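A hypothetical multi-stage layout for this case (stage and package names are assumptions): a shared base stage carries the boilerplate, and variant stages add extra libraries, such as the FTDI stack that mpsse builds against, on top of it.

```dockerfile
# Hypothetical sketch: one build file, several targets. Building the
# "mpsse" target reuses the cached "base" layers instead of redoing them.
FROM centos:7 AS base
RUN yum install -y gcc-c++ make cmake3 git && yum clean all

FROM base AS mpsse
RUN yum install -y libftdi-devel && yum clean all
```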

  • mentioned in issue #85

  • mentioned in issue #89 (closed)
