
fix travis with Chainerv5 #717

Merged
merged 2 commits into chainer:master on Oct 29, 2018

Conversation

yuyu2172 (Member)

tests/links_tests/model_tests/ssd_tests/test_multibox_loss.py:141: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
communicator_name = 'naive', mpi_comm = None, allreduce_grad_dtype = None
    def create_communicator(
            communicator_name='hierarchical', mpi_comm=None,
            allreduce_grad_dtype=None):
        """Create a ChainerMN communicator.
    
        Different communicators provide different approaches of communication, so
        they have different performance charasteristics. The default communicator
        ``hierarchical`` is expected to generally perform well on a variety of
        environments, so one need not to change communicators in most cases.
        However, choosing proper communicator may give better performance.
        The following communicators are available.
    
        +---------------+---+---+--------+--------------------------------------+
        |Name           |CPU|GPU|NCCL    |Recommended Use Cases                 |
        +===============+===+===+========+======================================+
        |pure_nccl      |   |OK |Required|``pure_nccl`` is recommended when     |
        |               |   |   |(>= v2) |NCCL2 is available in the environment.|
        +---------------+---+---+--------+--------------------------------------+
        |hierarchical   |   |OK |Required|Each node has a single NIC or HCA     |
        +---------------+---+---+--------+--------------------------------------+
        |two_dimensional|   |OK |Required|Each node has multiple NICs or HCAs   |
        +---------------+---+---+--------+--------------------------------------+
        |single_node    |   |OK |Required|Single node with multiple GPUs        |
        +---------------+---+---+--------+--------------------------------------+
        |flat           |   |OK |        |N/A                                   |
        +---------------+---+---+--------+--------------------------------------+
        |naive          |OK |OK |        |Testing on CPU mode                   |
        +---------------+---+---+--------+--------------------------------------+
    
        Args:
            communicator_name: The name of communicator (``naive``, ``flat``,
              ``hierarchical``, ``two_dimensional``, ``pure_nccl``, or
              ``single_node``)
            mpi_comm: MPI4py communicator
            allreduce_grad_dtype: Data type of gradient used in All-Reduce.
              If ``None``, the dtype of a model is used.
    
        Returns:
            ChainerMN communicator that implements methods defined in
            :class:`chainermn.CommunicatorBase`
    
        """
    
        if mpi_comm is None:
            try:
                import mpi4py.MPI
            except ImportError as e:
>               raise ImportError(str(e) + ": "
                                  "ChainerMN requires mpi4py for "
                                  "distributed training. "
                                  "Please read the Chainer official document "
                                  "and setup MPI and mpi4py.")
E               ImportError: No module named mpi4py.MPI: ChainerMN requires mpi4py for distributed training. Please read the Chainer official document and setup MPI and mpi4py.
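For context, the ImportError above is raised by chainermn.create_communicator when mpi4py is missing on the Travis workers. A minimal sketch of how a test can guard against that, assuming pytest is used; the helper and test names below are hypothetical and not necessarily the change made in this PR:

```python
import pytest


def _create_naive_communicator():
    # Hypothetical helper: build the CPU-mode ('naive') ChainerMN
    # communicator, skipping the calling test when mpi4py is not
    # installed instead of failing with the ImportError shown above.
    try:
        import chainermn
        return chainermn.create_communicator('naive')
    except ImportError:
        pytest.skip('mpi4py is not available; skipping ChainerMN test')


def test_multibox_loss_distributed():
    # Hypothetical stand-in for the real test in
    # tests/links_tests/model_tests/ssd_tests/test_multibox_loss.py.
    comm = _create_naive_communicator()
    assert comm.size >= 1
```

Per the docstring above, ``naive`` is the communicator intended for testing on CPU mode, which is why it appears in the failing test's parameters.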

@yuyu2172 yuyu2172 added this to the 0.11 milestone Oct 25, 2018
@yuyu2172 yuyu2172 changed the title fix test with Chainerv5 fix travis with Chainerv5 Oct 25, 2018
This was referenced Oct 25, 2018
@Hakuyume Hakuyume (Member) left a comment


LGTM

@Hakuyume Hakuyume merged commit 4ca0db1 into chainer:master Oct 29, 2018
2 participants