Opened 6 weeks ago

#62554 new enhancement

Let ports specify expected memory use per job

Reported by: ryandesign (Ryan Schmidt) Owned by:
Priority: Normal Milestone:
Component: base Version:
Keywords: Cc:


MacPorts sets the default number of build jobs via this code from portbuild.tcl:

proc portbuild::build_getjobs {args} {
    global buildmakejobs
    set jobs $buildmakejobs
    # if set to '0', use the number of cores for the number of jobs
    if {$jobs == 0} {
        try -pass_signal {
            set jobs [sysctl hw.activecpu]
        } catch {{*} eCode eMessage} {
            set jobs 2
            ui_warn "failed to determine the number of available CPUs (probably not supported on this platform)"
            ui_warn "defaulting to $jobs jobs, consider setting buildmakejobs to a nonzero value in macports.conf"
        }

        try -pass_signal {
            set memsize [sysctl hw.memsize]
            if {$jobs > $memsize / (1024 * 1024 * 1024) + 1} {
                set jobs [expr {$memsize / (1024 * 1024 * 1024) + 1}]
            }
        } catch {*} {}
    }
    if {![string is integer -strict $jobs] || $jobs <= 1} {
        set jobs 1
    }
    return $jobs
}

In other words, it uses the number of active logical (hyperthreaded) CPU cores or the amount of RAM in GiB plus one, whichever is smaller. The assumption is that each job will not need more than about 1 GiB of RAM.
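The current default can be sketched as follows. This is an illustrative Python model of the Tcl logic above, not MacPorts code; the function name and parameters are made up for the example:

    import os

    GIB = 1024 ** 3

    def default_build_jobs(active_cpus, memsize_bytes):
        """Mirror portbuild.tcl's default: min(CPU count, RAM in GiB + 1)."""
        jobs = active_cpus
        mem_limit = memsize_bytes // GIB + 1
        if jobs > mem_limit:
            jobs = mem_limit
        # never fall below one job
        return max(jobs, 1)

    # e.g. 8 logical cores but only 4 GiB of RAM: capped at 5 jobs
    print(default_build_jobs(8, 4 * GIB))  # -> 5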

This assumption is wrong for some ports, such as py-tensorflow. Instead of making those ports recreate the above calculation to arrive at an acceptable number of jobs, let's make the assumption configurable: introduce a new port option (perhaps build.expected_memory_use_per_job) with a default value of 1024 (MiB), use it in the above calculation, and let ports override it, so that py-tensorflow could set it to e.g. 2560 instead.
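The proposed change can be sketched like this, again as a Python model rather than MacPorts code; the option name build.expected_memory_use_per_job comes from the proposal above, and the function signature is hypothetical:

    GIB = 1024 ** 3
    MIB = 1024 ** 2

    def build_jobs(active_cpus, memsize_bytes, expected_mib_per_job=1024):
        """Cap the job count by memsize / expected_mib_per_job (+1),
        generalizing the current hard-coded 1-GiB-per-job assumption."""
        per_job_bytes = expected_mib_per_job * MIB
        mem_limit = memsize_bytes // per_job_bytes + 1
        return max(min(active_cpus, mem_limit), 1)

    # 8 cores, 8 GiB RAM: the 1024 MiB default allows all 8 jobs
    print(build_jobs(8, 8 * GIB))        # -> 8
    # a port declaring 2560 MiB per job gets proportionally fewer
    print(build_jobs(8, 8 * GIB, 2560))  # -> 4

A memory-hungry port would then only need to override the one option rather than reimplement the whole CPU/RAM calculation.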

