The Mosh package should be installed on both the client and server. Please find your platform below for installation instructions.
This is a standalone OS X package that will work on any supported Macintosh. However, if you are using a package manager such as Homebrew or MacPorts, we suggest using it to get Mosh, for better compatibility and automatic updates.
There is no 'native' Mosh executable for Windows available at this time. The Chrome version of Mosh is the easiest way to use Mosh on Windows.
Mosh on Cygwin uses OpenSSH and is suitable for Windows users with advanced SSH configurations.
Mosh is not compatible with Cygwin's built-in Windows Console terminal emulation. You will need to run Mosh from a full-featured terminal program such as mintty, rxvt, PuTTY, or an X11 terminal emulator.
Mosh is also available through Guix, which can itself be installed on top of other Linux distributions.
The ppa:keithw/mosh-dev PPA tracks the development version of Mosh.
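As a rough example, on Ubuntu the development PPA above can be added and Mosh installed from it (the stable package is simply `mosh` in most distributions' repositories); the username and hostname in the final command are placeholders:

```bash
# Add the development PPA and install Mosh (Ubuntu; package names may differ elsewhere)
sudo add-apt-repository ppa:keithw/mosh-dev
sudo apt-get update
sudo apt-get install mosh

# With Mosh installed on both client and server, start a session much like ssh
# (username and hostname below are placeholders)
mosh username@server.example.com
```

Mosh logs in over SSH and then switches to its own UDP-based protocol, so no extra daemon configuration is needed on the server.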
Operating system logos are trademarks or registered trademarks and are displayed for identification only. The vendors shown are not affiliated with and have not endorsed Mosh.
debian/control (in Git) includes an authoritative list of build dependencies.

Name | Typical package |
---|---|
Perl (5.14 or newer) | perl |
Protocol Buffers | protobuf-compiler, libprotobuf-dev |
ncurses | libncurses5-dev |
zlib | zlib1g-dev |
utempter (optional) | libutempter-dev |
OpenSSL | libssl-dev |
pkg-config is a build-only dependency on most systems. Note that mosh-client receives an AES session key as an environment variable. If you are porting Mosh to a new operating system, please make sure that a running process's environment variables are not readable by other users. We have confirmed that this is the case on GNU/Linux, OS X, and FreeBSD.
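Returning to the build itself: as a rough sketch, on a Debian or Ubuntu system the packages listed above can be installed and Mosh built from a Git checkout as follows (a C++ toolchain and the autotools are also assumed; package names differ on other distributions):

```bash
# Install the build dependencies using the Debian/Ubuntu package names from the table
sudo apt-get install perl protobuf-compiler libprotobuf-dev libncurses5-dev \
    zlib1g-dev libutempter-dev libssl-dev pkg-config

# Build from a Git checkout (autotools-based build)
git clone https://github.com/mobile-shell/mosh.git
cd mosh
./autogen.sh && ./configure && make
sudo make install
```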
New to our HPC systems, or need a quick refresher? This page will help you get started.
Click here for instructions on obtaining an HPC systems account. There is also a video tutorial on Getting an Account.
Prior to requesting access to any of the systems at one or more of the DoD Supercomputing Resource Centers (DSRCs), a user must register with the HPCMP (commonly referred to as applying for a pIE account).
To register with the HPCMP:
At this point, your pIE user account will be rejected or approved by your Service/Agency Approval Authority (S/AAA). Once your S/AAA completes this action, you will receive a pIE notification informing you of the status of your application. You can contact the HPC Help Desk by e-mail at help@helpdesk.hpc.mil or by phone at 1-877-222-2039 at any time throughout this process to determine your account status.
If your pIE user account is rejected, talk to your S/AAA or alternate S/AAA for additional information and discuss re-submitting your request.
If your pIE user account is approved and your Preferred Kerberos Realm is HPCMP.HPC.MIL, your information will be sent to the HPC Help Desk to complete your HPCMP account.
If your pIE user account is approved and you only plan to run on the Open Research Service, your information will be sent to Engineer Research and Development Center (ERDC) DoD Supercomputing Resource Center (DSRC) to complete your HPCMP account.
To complete and activate your pIE user account with a Preferred Kerberos Realm of HPCMP.HPC.MIL:
It is recommended that you have your Security Office send your Visit Request to ERDC Security as soon as you apply for an account in pIE. This may help to expedite activation of your account. This single visit request will suffice for your pIE account and access to HPCMP resources.
NOTE: A Visit Request is a vehicle to transmit personal (Privacy Act) information from one security office to another, and is used for the purpose of HPC Accounts only.
If you require a YubiKey (i.e., if you don't have a Common Access Card / CAC):
To complete and activate your pIE user account to run on the Open Research Service only, you must:
After all of the above is complete, your user account will be activated within pIE. Additional steps must be taken in order for you to access HPC resources at the DoD Supercomputing Resource Centers (DSRCs). Please work directly with your S/AAA to get access to these resources.
When you no longer need access to HPC resources, you must return your YubiKey to the HPC Help Desk at the following address:
HPC Help Desk
2435 Fifth St
ATTN: HPC Accounts
WPAFB OH 45433-7802
The HPCMP employs a network authentication protocol called Kerberos to authenticate user access to many of its resources, including all of its HPC systems, and many of its web sites. Kerberos provides strong authentication for client/server applications by using secret-key cryptography. Accessing a Kerberos-protected, or 'Kerberized' system, requires an electronic Kerberos 'ticket,' which may be obtained using an HPCMP Kerberos Client Kit or through the HPC Portal. Both methods require either a DoD Common Access Card (CAC) or a YubiKey.
Note: Regardless of which method you choose, before you can use your CAC to obtain a Kerberos ticket, you must first have CAC enablers such as ActivIdentity/ActivClient (Windows only) or CACKey (Linux) installed on your local system. Mac systems starting with 10.12 do not need 3rd-party CAC enablers. Refer to the Kerberos FAQ: Where do I get CAC Enablers (middleware)? for guidance on installing these.
For assistance changing your Kerberos password, if you know your password and it still works, see How do I change my Kerberos password? in the Kerberos FAQ. If you don't know your password, or if it does not work, contact help@helpdesk.hpc.mil.
For administrators, the source code for the Kerberos client and server kits is available on the Kerberos Source Downloads page. Users should not attempt to compile from source unless directed to do so by the HPC Help Desk.
Users who have installed an HPCMP Kerberos Client Kit and who have a Kerberos ticket may then access many systems via a simple Kerberized ssh, as follows:
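For example (the username is a placeholder, and the hostname is taken from the table below; the ticket itself is obtained beforehand with your client kit or through the HPC Portal):

```bash
# Confirm that you hold a valid Kerberos ticket
klist

# Kerberized ssh to an HPC system; replace the username and hostname with your own
ssh username@onyx.erdc.hpc.mil
```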
For some systems, however, you may have to specify a numbered login node. Please review the table below for specific system login information.
System | Login | Center |
---|---|---|
Centennial | centennial.arl.hpc.mil | ARL |
Gaffney | gaffney.navydsrc.hpc.mil | NAVY |
Koehr | koehr.navydsrc.hpc.mil | NAVY |
Mustang | mustang.afrl.hpc.mil | AFRL |
Onyx | onyx.erdc.hpc.mil | ERDC |
Information about installing Kerberos clients on your Windows desktop can be found in the Kerberos & Authentication section of this page.
A video tutorial is available on logging into a system.
Information about the HPC Portal may be found on the HPC Portal page.
The HPCMP Centers Team provides an assortment of classified and unclassified computational, storage, visualization, and support resources for DoD scientists and engineers. Please select the Systems tab in the main menu bar to find detailed information about the equipment we make available to users.
While the specific computing environment on our HPC systems may vary by vendor, architecture, and the DoD Supercomputing Resource Center (DSRC) at which the systems are located, we provide certain common elements to help create a similar user experience as you move from system to system within our Program. These elements include environment variables, modules, math libraries, performance and profiling tools, high productivity languages, and others.
Each HPC system consists broadly of a set of login nodes, compute nodes, home directory space, and working directory (scratch) space, along with a large suite of software tools and applications. Access to HPC systems is typically gained through the use of a command line within a secure shell (ssh) instance. Specific authentication and login steps are provided in the Kerberos & Authentication section of this page.
Each DSRC operates a similar petascale mass storage system for long-term data storage, and users have the option of storing vital data files at an off-site disaster recovery facility. We also provide short-term storage on HPC systems themselves and on a Center-wide File System (CWFS) located at each DSRC.
Need to compile your own source code instead of using the COTS and GOTS applications available on our HPC systems? No problem. Each HPC system offers multiple compiler choices for users. Available compilers and instructions for using them are provided in section '5.2 Available Compilers' of each HPC system's User Guide. The User Guide for a particular system is located on the Systems page; just click the Systems link in the main menu bar above, then navigate to the system of interest and look for the User Guide in the Available Documentation box.
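For instance, compiler selection is typically handled through the modules environment mentioned above; the module and file names below are illustrative placeholders, and the actual module names for a given system are listed in its User Guide:

```bash
# See which compiler (and MPI) modules the system provides
module avail

# Load a compiler suite (module name here is a placeholder; check 'module avail')
module load gcc

# Compile a simple program with the loaded compiler
gcc -O2 -o my_app my_app.c
```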
In order to manage the volume of work demanded of HPCMP supercomputers, the HPC Centers Team employs a batch queuing system, PBS Pro, for workload management. The batch system makes use of queues, which hold a job until sufficient resources are available to execute the job. The characteristics of the queues found on each HPC system may vary, depending upon the size of the system, the type of workload for which it is optimized, the size of the job, and the priority of the work. To see details of the queues on specific HPC Systems, select the system of interest from the Systems menu in the main menu bar. Look for the 'Queue Descriptions and Limits' box.
In a typical workflow, a user submits a job to a queue, and then at a future time when resources are available, a scheduler dispatches the job for execution. Upon completion, the job ends and relevant files are collected and deposited in a location specified by the user. The user generally has no control over when the job starts. If such control is needed, the HPC Centers Team provides the Advance Reservation Service (ARS), which allows users to choose a future time at which the job is guaranteed to run. Note, however, that the number of CPUs dedicated to ARS is limited.
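In PBS Pro terms, that submit-and-wait workflow usually amounts to a couple of commands; the script name below is a placeholder:

```bash
# Submit a batch script to the queuing system; PBS prints the new job's ID
qsub my_job.pbs

# Check the status of your queued and running jobs
qstat -u $USER
```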
The priority assigned to each queue is dictated by the priority of the work the queue is allowed to run. DoD Service/Agency computational projects may have different types of accounts and may run at different priorities. All foreground and background usage will be tracked by project and subproject and reported to pIE by subproject.
Queue Name/Priority | Type of Queue | Available To |
---|---|---|
Standard | Allows users to run in foreground at standard priority. | All Users |
Background | Allows users to run in background at lowest priority without charging the user's allocation. Impact on foreground usage is minimal. Some accounts may have background-only allocation, if they have no other allocation on that system. | All Users |
Debug | Allows users to run short jobs at very high priority for program development and testing. | All Users |
Frontier | Reserved for users/projects who received Frontier priority allocation via a proposal review process. | Frontier Users Only |
High-priority | Reserved for high-priority, time-critical jobs on a regular or recurring basis. | User works with Service Agency Approval Authority (S/AAA) to request special permission. |
Urgent | Reserved for high priority, single time-sensitive events arising from an unexpected need requiring faster-than-normal turnaround and special handling. Jobs run at highest priority on system. | User works with Service Agency Approval Authority (S/AAA) to request special permission. |
Batch jobs are controlled by scripts written by the user and submitted to the batch queuing system that manages the compute resource and schedules the job to run based on a set of policies. Batch scripts consist of two parts:
1) a set of directives that describe your resource requirements (time, number of processors, etc.) and
2) UNIX commands that perform your computations.
These UNIX commands may create directories, transfer files, etc.; anything you can type at a UNIX shell prompt, as sketched in the example below.
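For illustration only, a minimal PBS Pro batch script might look like the sketch below; the project ID, queue name, resource counts, and program name are placeholders, and the directives accepted on a particular system are documented in its PBS Guide:

```bash
#!/bin/bash
## Part 1: PBS directives describing the resource requirements (all values are placeholders)
#PBS -A MYPROJECT01                      # project/allocation ID
#PBS -q standard                         # queue name (see the queue table above)
#PBS -l select=2:ncpus=48:mpiprocs=48    # number of nodes and cores per node
#PBS -l walltime=01:00:00                # maximum wall-clock time
#PBS -N example_job                      # job name
#PBS -j oe                               # merge stdout and stderr into one output file

## Part 2: ordinary UNIX commands that perform the computation
cd $PBS_O_WORKDIR                        # directory from which the job was submitted
mpiexec ./my_program input.dat           # run the application (placeholder program)
```

A script like this is handed to the queuing system with qsub, and the #PBS directives must appear before the first executable command in the script.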
Please refer to each HPC system's PBS Guide for details regarding the format and execution of batch scripts. The PBS Guide for a particular system is located on the Systems page; just click the Systems link in the main menu bar above, then navigate to the system of interest and look for the PBS Guide in the Available Documentation box.