ChinaUnix forum
Thread starter: scream

Recently fascinated by "parallel computing" clusters, hoping an expert can point the way

#1 Posted 2005-08-20 14:30

Originally posted by scream:
HPC -> BEOWULF
MPI -> MPICH
LINUX -> NFS, NIS, NAT




HPC = High Performance Computing

HPC = host-based computing + cluster computing

cluster computing = field-specific clustering technology + beowulf

HPC != beowulf

MPI != MPICH; do you really know the difference?

MPICH is just one implementation of the MPI standard, a straightforward way to run your program in parallel.

NFS and NIS are used by an HPC (Beowulf) cluster for multi-node access to shared home and program directories.

NFS and NIS are not involved in the computation itself.
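In practice the master node usually exports those shared directories over NFS. A hypothetical /etc/exports fragment for a cluster whose compute nodes are named node01..node64 (the paths and hostnames here are illustrative, not from the original post):

```
# /etc/exports on the master/NFS node (hypothetical paths and hostnames)
/home         node*(rw,sync,no_root_squash)
/opt/mpich    node*(ro,sync)
```

Every compute node then mounts /home so users see the same home directory, and binaries installed once under /opt/mpich are visible cluster-wide; NIS keeps the UIDs consistent across nodes.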

PVFS and Lustre are cluster filesystems for high-performance computing.

NAT has nothing to do with HPC (Beowulf).

#2 Posted 2005-08-20 14:31

Originally posted by kaolaok: I have built a 64-node one.


oh? really? good.

#3 Posted 2005-08-20 19:40

Originally posted by scream: Is Dell's HPC network interconnect different from everyone else's?


Does Dell have its own HPC solution? No, I don't think so.

Dell is just a box seller. As I heard from a friend who worked on an HPC project with Dell, they can only provide servers, storage, and networking: no integrated open-source HPC solution, no benchmarking, no evaluation, no demonstration, no consulting service. All you get is a bunch of hardware with a Dell logo.


A beowulf HPC cluster has FOUR network interconnect options:

1. Gigabit Ethernet (GigE) network
2. Myrinet (from Myricom) network
3. InfiniBand network
4. Quadrics (Elan) network

#4 Posted 2005-08-20 19:43

Originally posted by kaolaok: 64 Dell PowerEdge 1850s, with one 2850 as the management node. As for the performance of the Dell switches, I'd better not say!


I really doubt the MPI performance of such a configuration.

It is well known that ordinary ethernet has a critical latency bottleneck in an HPC environment.

IMHO, a 32-node HPC cluster with an ethernet interconnect is already at its limit; I can't imagine a 64-node cluster working well as a single computing image for the end user.

How do you calculate the MPI latency of such a configuration?