计算化学公社

[Gaussian/gview] Gaussian 16 is now on sale

Posted 2017-01-12 09:06:24 (OP)
The official site has posted the news. Gaussian 16 is finally about to come out. Waiting eagerly~



Posted 2017-01-12 09:39:22
Looking forward to excited-state potential energy surfaces and GPU acceleration.

Posted 2017-01-12 10:02:18
The new website looks really slick, too.

Posted 2017-01-12 13:39:06
Wondering when a download will be available.

Posted 2017-01-12 14:26:07
A few questions for the experts here:
Is Gaussian 16 on sale now? How much does the Linux version cost?
Compared with G09, what are the main improvements?
If we want to buy it, what is the procedure?
Our supervisor asked me to look into this, since we are thinking of buying G16.
Thanks for any pointers.

Posted 2017-01-12 17:04:56
Is there a cracked version online?

Posted 2017-01-12 17:24:10
Is this for real?

Posted 2017-01-12 19:14:49
Thanks for sharing.

Posted 2017-01-12 21:13:02
It's 2017 already~

Posted 2017-01-13 09:11:38
Counting from 1994: Gaussian 94.
4 years later: Gaussian 98.
5 years later: Gaussian 03.
6 years later: Gaussian 09.
7 years later: Gaussian 16.
So we just sit back and wait for Gaussian 24.

Posted 2017-01-13 09:47:57
yjcmwgk posted on 2017-1-13 09:11:
Counting from 1994: Gaussian 94
4 years later: Gaussian 98
5 years later: Gaussian 03

Haha, checks out.

Posted 2017-01-13 21:42:54
Really looking forward to GPU-acceleration benchmarks.

Posted 2017-01-15 21:10:13
Manager Xiao at 墨灵格 says we will have to wait a while longer.
Posted 2017-01-15 22:33:00 (OP)
Everyone is waiting eagerly.

Posted 2017-01-18 16:44:28
Last edited by hlmkh on 2017-1-18 16:50

Gaussian 16 Rev. A.03 Release Notes
New Modeling Capabilities
TD-DFT analytic second derivatives for predicting vibrational frequencies/IR and Raman spectra and performing transition state optimizations and IRC calculations for excited states.
EOMCC analytic gradients for performing geometry optimizations.
Anharmonic vibrational analysis for VCD and ROA spectra: see Freq=Anharmonic.
Vibronic spectra and intensities: see Freq=FCHT and related options.
Resonance Raman spectra: see Freq=ReadFCHT.
New DFT functionals: M08 family, MN15, MN15L.
New double-hybrid methods: DSDPBEP86, PBE0DH and PBEQIDH.
PM7 semi-empirical method.
Adamo excited state charge transfer diagnostic: see Pop=DCT.
The EOMCC solvation interaction models of Caricato: see SCRF=PTED.
Generalized internal coordinates, a facility which allows arbitrary redundant internal coordinates to be defined and used for optimization constraints and other purposes. See Geom=GIC and GIC Info.
Performance Enhancements
NVIDIA K40 and K80 GPUs are supported under Linux for Hartree-Fock and DFT calculations. See the Using GPUs tab for details.
Parallel performance on larger numbers of processors has been improved. See the Parallel Performance tab for information about how to get optimal performance on multiple CPUs and clusters.
Gaussian 16 uses an optimized memory algorithm to avoid I/O during CCSD iterations.
There are several enhancements to the GEDIIS optimization algorithm.
CASSCF improvements for active spaces ≥ (10,10) increase performance and make active spaces of up to 16 orbitals feasible (depending on the molecular system).
Significant speedup of the core correlation energies for W1 compound model.
Gaussian 16 incorporates algorithmic improvements for significant speedup of the diagonal, second-order self-energy approximation (D2) component of composite electron propagator (CEP) methods as described in [DiazTinoco16]. See EPT.
Usage Enhancements
Tools for interfacing Gaussian with other programs, both in compiled languages such as Fortran and C and with interpreted languages such as Python and Perl. Refer to the Interfacing to Gaussian 16 page for details.
Parameters specified in Link 0 (%) input lines and/or in a Default.Route file can now also be specified via either command-line arguments or environment variables. See the Link 0 Equivalences tab for details.
Compute the force constants at every nth step of a geometry optimization: see Opt=Recalc.
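As an illustration of the last item, Opt=Recalc might appear in an input deck like the following sketch (the route, title, and water geometry are illustrative assumptions, not from the release notes; check the Gaussian manual for the exact option spelling):

```
%Mem=8GB
%CPU=0-7
# B3LYP/6-31G(d) Opt=Recalc=5

water, recomputing force constants every 5th optimization step

0 1
O   0.000   0.000   0.117
H   0.000   0.757  -0.470
H   0.000  -0.757  -0.470

```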

USING GPUS:
Gaussian 16 can use NVIDIA K40 and K80 GPUs under Linux. Earlier GPUs do not have the computational capabilities or memory size to run the algorithms in Gaussian 16. Gaussian 16 does not yet support the Tesla-Pascal series.

Allocating sufficient amounts of memory to jobs is even more important when using GPUs than for CPUs, since larger batches of work must be done at the same time in order to use the GPUs efficiently. The K40 and K80 units can have up to 16 GB of memory. Typically, most of this should be made available to Gaussian. Giving Gaussian 8-9 GB works well when there is 12 GB total on each GPU; similarly, allocating Gaussian 11-12 GB is appropriate for a 16 GB GPU. In addition, at least an equal amount of memory must be available for each CPU thread which is controlling a GPU.
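For a 16 GB K80, the advice above might translate into Link 0 lines like this minimal sketch (a single GPU controlled by CPU 0; the 11 GB figure is the one suggested in the text):

```
%Mem=11GB
%CPU=0
%GPUCPU=0=0
```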

When using GPUs, it is essential to have the GPU controlled by a specific CPU. The controlling CPU should be as physically close as possible to the GPU it is controlling. The hardware arrangement on a system with GPUs can be checked using the nvidia-smi utility. For example, this output is for a machine with two 16-core Haswell CPU chips and four K80 boards, each of which has two GPUs:

GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7 CPU  Affinity                  
GPU0    X  PIX  SOC  SOC  SOC  SOC  SOC SOC  0-15     cores on first chip
GPU1  PIX    X  SOC  SOC  SOC  SOC  SOC SOC  0-15                  
GPU2  SOC  SOC    X  PIX  PHB  PHB  PHB PHB  16-31    cores on second chip                  
GPU3  SOC  SOC  PIX    X  PHB  PHB  PHB PHB  16-31                  
GPU4  SOC  SOC  PHB  PHB    X  PIX  PXB PXB  16-31                  
GPU5  SOC  SOC  PHB  PHB  PIX    X  PXB PXB  16-31                  
GPU6  SOC  SOC  PHB  PHB  PXB  PXB    X PIX  16-31                  
GPU7  SOC  SOC  PHB  PHB  PXB  PXB  PIX   X  16-31                  
The important part of this output is the CPU affinity. This example shows that GPUs 0 and 1 (on the first K80 card) are connected to the CPUs on chip 0, while GPUs 2-7 (on the other three K80 cards) are connected to the CPUs on chip 1.

The GPUs to use for a calculation and their controlling CPUs are specified with the %GPUCPU Link 0 command. This command takes one parameter:

%GPUCPU=gpu-list=controlling-cpus                  
where gpu-list is a comma-separated list of GPU numbers, possibly including numerical ranges (e.g., 0-4,6), and controlling-cpus is a similarly-formatted list of controlling CPU numbers. To continue with the same example, a job which uses all 32 CPUs, with 6 of them also controlling the 6 GPUs, would use the following Link 0 commands:

%CPU=0-31                  
%GPUCPU=0,1,2,3,4,5=0,1,16,17,18,19                  
This pins threads 0-31 to CPUs 0-31 and then uses GPU0 controlled by CPU 0, GPU1 controlled by CPU 1, GPU2 controlled by CPU 16, and so on. Note that the controlling CPUs are included in %CPU. The GPU and CPU lists could be expressed more tersely as:

%CPU=0-31                  
%GPUCPU=0-5=0-1,16-19                  
Normally one uses consecutive numbering in the obvious way, but things can be associated differently in special cases. For example, suppose on the same machine one already had one job using 6 CPUs running with %CPU=16-21. Then if one wanted to use the other 26 CPUs with 6 controlling GPUs one would specify:

%CPU=0-15,22-31                  
%GPUCPU=0-5=0-1,22-25                  
This would create 26 threads, with GPUs controlled by the threads on CPUs 0, 1, 22, 23, 24 and 25.
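The %GPUCPU list syntax is easy to mimic; this small Python sketch (an illustration, not part of Gaussian) expands the range lists into explicit GPU-to-controlling-CPU pairs, which can help sanity-check a pinning scheme before submitting a job:

```python
def expand(spec):
    """Expand a comma-separated list with ranges, e.g. "0-1,16-19" -> [0, 1, 16, 17, 18, 19]."""
    out = []
    for part in spec.split(","):
        if "-" in part:
            lo, hi = map(int, part.split("-"))
            out.extend(range(lo, hi + 1))
        else:
            out.append(int(part))
    return out

def gpu_cpu_pairs(gpucpu):
    """Parse a %GPUCPU value of the form "gpu-list=controlling-cpus" into (gpu, cpu) pairs."""
    gpus, cpus = gpucpu.split("=")
    g, c = expand(gpus), expand(cpus)
    if len(g) != len(c):
        raise ValueError("GPU and controlling-CPU lists must have the same length")
    return list(zip(g, c))

# The terse form from the example above:
print(gpu_cpu_pairs("0-5=0-1,16-19"))
# -> [(0, 0), (1, 1), (2, 16), (3, 17), (4, 18), (5, 19)]

# The special case sharing the machine with another job:
print(gpu_cpu_pairs("0-5=0-1,22-25"))
# -> [(0, 0), (1, 1), (2, 22), (3, 23), (4, 24), (5, 25)]
```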

GPUs are not helpful for small jobs but are effective for larger molecules when doing DFT energies, gradients and frequencies (for both ground and excited states). They are not used effectively by post-SCF calculations such as MP2 or CCSD. Each GPU is several times faster than a CPU, but since modern machines typically have many more CPUs than GPUs, it is important to use all the CPUs as well as the GPUs, and the overall speedup from GPUs is then smaller than the per-device ratio. For example, if a GPU is 5x faster than a CPU, then going from 1 CPU to 1 CPU + 1 GPU gives a 5x speedup, but going from 32 CPUs to 32 CPUs + 8 GPUs (i.e., 24 computing CPUs + 8 GPUs) is equivalent to 24 + 5×8 = 64 CPUs, for a speedup of 64/32 or 2x.
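The CPU-equivalent arithmetic behind that example can be written as a tiny helper (an illustration, not a Gaussian tool): each GPU-controlling CPU stops computing, and each GPU counts as gpu_factor CPU-equivalents:

```python
def gpu_speedup(n_cpus, n_gpus, gpu_factor=5.0):
    """CPU-equivalent speedup of (n_cpus CPUs + n_gpus GPUs) over n_cpus CPUs alone.

    Assumes each GPU needs one controlling CPU (which then does no computation
    itself) and that each GPU is worth gpu_factor CPU-equivalents.
    """
    equivalents = (n_cpus - n_gpus) + gpu_factor * n_gpus
    return equivalents / n_cpus

print(gpu_speedup(1, 1))   # 5.0: going from 1 CPU to 1 CPU + 1 GPU
print(gpu_speedup(32, 8))  # 2.0: 32 CPUs -> 24 computing CPUs + 8 GPUs
```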

GPUs on nodes in a cluster can be used. Since the %CPU and %GPUCPU specifications are applied to each node in the cluster, the nodes must have identical configurations (number of GPUs and their affinity to CPUs); since most clusters are collections of identical nodes, this is not usually a problem.
http://gaussian.com/relnotes/

