
August 2016 Open MPI Developer's Meeting

Logistics:

  • Start: 9am, Tue Aug 16, 2016
  • Finish: 1pm, Thu Aug 18, 2016
  • Location: IBM facility, Dallas, TX
  • Attendance fee: $50/person, see registration link below

Attendees

Please both register at EventBrite ($50/person) and add your name to the wiki list below if you are coming to the meeting:

  1. Jeff Squyres, Cisco
  2. Howard Pritchard, LANL
  3. Geoffrey Paulsen, IBM
  4. Ralph Castain, Intel
  5. George Bosilca, UTK (17 and 18)
  6. Josh Hursey, IBM
  7. Edgar Gabriel, UHouston
  8. Takahiro Kawashima, Fujitsu
  9. Shinji Sumimoto, Fujitsu
  10. Brian Barrett, Amazon Web Services
  11. Nathan Hjelm, LANL
  12. Sameh Sharkawi, IBM (17 and 18)
  13. Mark Allen, IBM
  14. Josh Ladd, Mellanox (17)
  15. ...please fill in your name here if you're going to attend...

Topics

  • Annual git committer audit

  • Plans for v2.1.0 release

    • Need community to contribute what they want in v2.1.0
    • Want to release by end of 2016 at the latest
  • Present information about IBM Spectrum MPI, processes, etc.

    • May have PRs ready to discuss requested changes, but the schedule is tight for us in July / August.
  • MTT updates / future direction

  • How to help alleviate "drowning in CI data" syndrome?

    • One example: https://github.com/open-mpi/ompi/pull/1801
    • One suggestion: should we actively market for testers in the community to help wrangle this stuff?
    • If Jenkins detects an error, can we get Jenkins to retry the tests without the PR changes, and then compare the results to see if the PR itself is introducing a new error?
    • How do we stabilize Jenkins to alleviate all these false positives?
  • PMIx roadmap discussions

  • Thread-safety design

    • Need some good multi-threaded performance tests (per Nathan and Artem discussion); see the sketch after this list
      • Do we need to write them ourselves?
    • Review/define the path forward
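
A rough sketch of the kind of multi-threaded performance test discussed above, assuming MPI_THREAD_MULTIPLE and OpenMP threads, with each thread driving its own ping-pong on a separate tag; the message size, iteration count, and pattern are placeholders, not an agreed-upon benchmark:

```c
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

#define ITERS    1000
#define MSG_SIZE 8

int main(int argc, char **argv)
{
    int provided, rank;

    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE) {
        MPI_Abort(MPI_COMM_WORLD, 1);
    }
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    int peer = rank ^ 1;   /* assumes an even number of ranks */

    double t0 = MPI_Wtime();
    #pragma omp parallel
    {
        char buf[MSG_SIZE] = {0};
        int tag = omp_get_thread_num();   /* one tag per thread */

        for (int i = 0; i < ITERS; i++) {
            if (0 == rank % 2) {
                MPI_Send(buf, MSG_SIZE, MPI_CHAR, peer, tag, MPI_COMM_WORLD);
                MPI_Recv(buf, MSG_SIZE, MPI_CHAR, peer, tag, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else {
                MPI_Recv(buf, MSG_SIZE, MPI_CHAR, peer, tag, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, MSG_SIZE, MPI_CHAR, peer, tag, MPI_COMM_WORLD);
            }
        }
    }
    double t1 = MPI_Wtime();

    if (0 == rank) {
        printf("multi-threaded ping-pong total time: %f s\n", t1 - t0);
    }
    MPI_Finalize();
    return 0;
}
```
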
  • Fujitsu status

    • Memory consumption evaluation
    • MTT status
    • PMIx status
  • Revive btl/openib memalign hooks?

  • Request completion callback and thread safety

  • Discuss appropriate default settings for openib BTL

    • Email thread on performance conflicts between RMA/openib and SM/Vader
  • Ralph offers to give a presentation on "Flash Provisioning of Clusters", if folks are interested

  • Cleanup of exposed internal symbols (see https://github.com/open-mpi/ompi/pull/1955)
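
For context, a minimal illustration of the usual approach to this kind of cleanup (build the library with -fvisibility=hidden and explicitly mark only the intended public API for export); the macro and function names are made up for the example and are not the actual contents of PR 1955:

```c
/* With the library compiled using -fvisibility=hidden, only symbols
 * explicitly marked "default" are exported from the shared object;
 * everything else stays internal and cannot clash with application or
 * third-party symbols. */
#if defined(__GNUC__) && (__GNUC__ >= 4)
#  define EXAMPLE_EXPORT __attribute__((visibility("default")))
#else
#  define EXAMPLE_EXPORT
#endif

/* Exported: part of the intended public interface. */
EXAMPLE_EXPORT int example_public_call(void);

/* Not exported when built with -fvisibility=hidden: internal helper only. */
int example_internal_helper(void);
```
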

  • Performance Regression tracking

  • Symbol versioning

    • Per request from Debian
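
A minimal sketch of one way versioned symbols can be attached with the GNU toolchain (a .symver directive plus a linker version script defining the version node); all names and the version node below are hypothetical:

```c
/* Hypothetical example: expose example_send_v20() to the linker under the
 * versioned name example_send@OMPI_2.0.  The OMPI_2.0 node would have to be
 * defined in an ld version script passed via -Wl,--version-script=...;
 * using @@ instead of @ would make this the default version of the symbol. */
int example_send_v20(const void *buf, int count);

__asm__(".symver example_send_v20, example_send@OMPI_2.0");

int example_send_v20(const void *buf, int count)
{
    (void) buf;
    (void) count;
    return 0;   /* placeholder body for the illustration */
}
```
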
  • What to do about MPI_Info PR from IBM / MPI Forum gyrations about MPI_Info?

  • Should we be using Slack.com as a community?

Already discussed

  • Status of v2.0.1 release

    • Lots of PRs still...
    • From the meeting:
      • Closing in on v2.0.1. Most PRs are in. Release next Tuesday (Aug 23, 2016) if possible
  • After v2.1.0 release, should we merge from master to the v2.x branch?

    • Only if there are no backwards compatibility issues (!)
    • This would allow us to close the divergence/gap from master to v2.x, but keep life in the v2.x series (which is attractive to some organizations)
    • Alternatively, we might want to fork and create a new 3.x branch.
    • From the meeting:
      • Long discussion. There seem to be two issues:
  • Update on the migration to new cloud services for the website, database, etc.

    • DONE:
      • DNS:
        • All 6 domains transferred to Jeff's GoDaddy account
      • Web site:
      • Mailing lists:
        • Migrate mailman lists to NMC
        • Freeze old mailing list archives, add to ompi-www git
        • Add old mailing list archives to mail-archive.com
        • Set up new list mail to archive to mail-archive.com
      • Email
        • Set up 2 legacy email addresses: rhc@ and jjhursey@
      • Infrastructure
        • Nightly snapshot tarballs being created on RHC's machine and SCPed to www.open-mpi.org
      • Github push notification emails (i.e., "gitdub")
        • Converted Ruby gitdub to PHP
        • Works for all repos... except ompi-www (due to memory constraints)
          • Might well just disable git commit emails for ompi-www
      • Contribution agreements
    • Still to-do:
      • Web site:
        • Probably going to shut down the mirroring program.
        • Possibly host the tarballs at Amazon S3 and put CloudFront in front of them
      • Spin up an Amazon EC2 instance (thank you Amazon!) for:
        • Hosting Open MPI community Jenkins master
        • Hosting Open MPI community MTT database and web server
      • Revamp / consolidate ompi master:contrib/ -- there are currently 3 subdirs that should be disambiguated and their overlap removed. Perhaps name subdirs by the DNS name where they reside / operate?
        • infrastructure
        • build server
        • nightly
      • Spend time documenting where everything is / how it is setup
      • Fix OMPI timeline page: https://www.open-mpi.org/software/ompi/versions/timeline.php
      • Possible umbrella non-profit organization
      • Update Open MPI contrib agreements
        • Created a new contributions@lists. email address, will update agreements
  • MCA support as a separate package?

    • Now that we have multiple projects (PMIx) and others using MCA plugins, does it make sense to create a separate repo/package for MCA itself? Integrating MCA into these projects was modestly painful (e.g., identifying what other infrastructure - such as argv.h/c - needs to be included) - perhaps a more packaged solution will make it simpler.
    • Need to "tag" the component libraries with their project name, as library confusion is becoming more prevalent now that OMPI is starting to use MCA-based packages such as PMIx
    • From the meeting:
      • The need for this has gone down quite a bit: PMIx has already copied and renamed the MCA code, and Warewulf is going to switch to Python.
      • But it seems worthwhile to take the next few steps in spreading the project name throughout the MCA system:
        • Put the project name in the component filename: mca_PROJECT_FRAMEWORK_COMPONENT.la
        • Add some duplicate-checking code in the MCA var base: if someone sets a value for FRAMEWORK_COMPONENT_VAR and there is more than one of those (i.e., the same framework/component/var in two different projects, and the project name was not specified), then we need to error out and let a human figure it out (see the sketch below).
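
A hedged sketch of what that duplicate check could look like; the struct and function names are illustrative and do not reflect the real mca_base_var code:

```c
#include <stdio.h>
#include <string.h>

/* Illustrative registry entry: a variable is identified by project,
 * framework, component, and variable name. */
typedef struct {
    const char *project;      /* e.g. "ompi" or "pmix" */
    const char *framework;
    const char *component;
    const char *variable;
} registered_var_t;

/* Resolve an unqualified FRAMEWORK_COMPONENT_VAR name.  Returns 0 on a
 * unique match, -1 if there is no match or the name is ambiguous across
 * projects (in which case a human has to disambiguate by adding the
 * project name). */
static int resolve_var(const registered_var_t *vars, int nvars,
                       const char *framework, const char *component,
                       const char *variable, const registered_var_t **out)
{
    int matches = 0;
    for (int i = 0; i < nvars; i++) {
        if (0 == strcmp(vars[i].framework, framework) &&
            0 == strcmp(vars[i].component, component) &&
            0 == strcmp(vars[i].variable, variable)) {
            *out = &vars[i];
            matches++;
        }
    }
    if (matches > 1) {
        fprintf(stderr,
                "Ambiguous MCA variable %s_%s_%s: registered by multiple "
                "projects; please prefix it with a project name\n",
                framework, component, variable);
        return -1;
    }
    return (1 == matches) ? 0 : -1;
}
```
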
  • Plans for folding ompi-release Github repo back into ompi Github repo

  • (Possibly) Remove atomics from OBJ_RETAIN/OBJ_RELEASE in the THREAD_SINGLE case.
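
A minimal sketch of the idea, using C11 atomics; the type, flag, and function names are illustrative, not the actual OPAL object code:

```c
#include <stdatomic.h>
#include <stdbool.h>

typedef struct {
    atomic_int ref_count;
} example_object_t;

/* False when the application only asked for MPI_THREAD_SINGLE /
 * MPI_THREAD_FUNNELED, i.e., when no concurrent retains can happen. */
extern bool example_threads_enabled;

static inline void example_obj_retain(example_object_t *obj)
{
    if (example_threads_enabled) {
        /* Concurrent access possible: pay for the atomic increment. */
        atomic_fetch_add_explicit(&obj->ref_count, 1, memory_order_relaxed);
    } else {
        /* Single-threaded case: a plain read-modify-write avoids the
         * locked instruction on the fast path. */
        int v = atomic_load_explicit(&obj->ref_count, memory_order_relaxed);
        atomic_store_explicit(&obj->ref_count, v + 1, memory_order_relaxed);
    }
}
```
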

  • Continue --net mpirun CLI option discussion from Feb 2016 meeting

    • Originally an IBM proposal.
    • Tied to issues of "I just want to use network X" user intent, without needing to educate users on the complexities of PML, MTL, BTL, COLL, etc.
    • We didn't come to any firm conclusions in February.
    • From the meeting:
      • There was a long discussion about this in the meeting; see the meeting minutes for more detail.
  • MPI_Reduce_Local - move into coll framework.

    • From the meeting:
      • It isn't in the coll framework already simply because it isn't a collective.
      • But IBM would like to have multiple backends to MPI_REDUCE_LOCAL
      • The OMPI Way to do this is with a framework / component
      • Seems like overkill to have a new framework just for this one MPI function
      • So it seems ok to add it to the coll framework
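
For reference, MPI_Reduce_local combines a local input buffer into a local input/output buffer with a reduction op and involves no communication, which is why it never lived in the coll framework in the first place. Minimal usage example:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int in[4]    = {1, 2, 3, 4};
    int inout[4] = {10, 20, 30, 40};

    /* inout[i] = in[i] + inout[i], computed entirely within this process. */
    MPI_Reduce_local(in, inout, 4, MPI_INT, MPI_SUM);

    printf("%d %d %d %d\n", inout[0], inout[1], inout[2], inout[3]);

    MPI_Finalize();
    return 0;
}
```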