/* Problem: print the shortest word in a string. The string is entered in the main function; a separate function performs the search for the shortest word. */
#include <time.h>
#include <cstdio>
#include <iostream>
using namespace std;

void shortestWord(char* in)
{
    int i, j = 0;
    int o[1000];
    for (i = 0; *(in + i) != 0; i++) {
        if (*(in + i) == ' ') {
            o[j] = i;                 // o records the position of every space; the gap between
            j++;                      // adjacent spaces gives each word's length
        }
    }
    j--;
    int z = 0;
    int k = o[0];                     // k is the shortest word's length, initialized to the first
                                      // space's position because the loop below skips the first word
    int l[50];
    for (; j != 0; j--) {
        if (o[j] - o[j - 1] - 1 < k) {
            z = 0;
            k = o[j] - o[j - 1] - 1;
            l[0] = j - 1;
        } else if (o[j] - o[j - 1] - 1 == k) {
            z++;                      // z counts how many shortest words there are
            l[z] = j - 1;
        }
    }
    if (o[0] == k) {
        for (int n = 0; n < k; n++) {
            printf("%c", *(in + n));  // if the first word is the shortest, print it
        }
        printf(" ");
    }
    for (int m = z; m > 0; m--) {
        for (int n = 1; n <= k; n++) {
            printf("%c", *(in + o[l[m]] + n));  // print the other shortest words
        }
        printf(" ");
    }
}

int main()                            // was "void main", which is non-standard
{
    char in[1000] = "Learning a the parameters of neural networks is perhaps one of the most well studied problems within the field of machine learning. Early work on backpropagation algorithms showed that the gradient of the neural net learning objective could be computed efficiently and used within a gradient descent scheme to learn the weights of a network with multiple layers of non-linear hidden units. Unfortunately, this technique doesn’t seem to generalize well to networks that have very many hidden layers (i.e. deep networks). The common experience is that gradient-descent progresses extremely slowly on deep nets, seeming to halt altogether before making significant progress, resulting in poor performance on the training a set (under-fitting)";
    int a = clock();
    shortestWord(in);
    int b = clock();
    int c = b - a;
    printf("%d", c);
    getchar();
    return 0;
}
The code above is my own, and it doesn't work well: a test run took about 2 ms, which is too slow, and it doesn't handle consecutive spaces.
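A sketch of one way to avoid those problems (my own rewrite, not from the book; the function names are made up): instead of recording space positions, scan runs of non-space characters directly. A run of consecutive spaces then just gets skipped, and the final word, which the space-position approach never examines, is counted like any other. Two linear passes, no index arrays:

```cpp
#include <cstdio>

// Pass 1: return the length of the shortest word, or -1 if there are no words.
// Runs of spaces are skipped, so consecutive blanks contribute no zero-length "words".
static int shortestLen(const char* s)
{
    int best = -1;
    for (int i = 0; s[i] != '\0'; ) {
        while (s[i] == ' ') i++;                  // skip any run of spaces
        int start = i;
        while (s[i] != '\0' && s[i] != ' ') i++;  // consume one word
        int len = i - start;
        if (len > 0 && (best < 0 || len < best)) best = len;
    }
    return best;
}

// Pass 2: print every word whose length ties for shortest.
void printShortestWords(const char* s)
{
    int best = shortestLen(s);
    for (int i = 0; s[i] != '\0'; ) {
        while (s[i] == ' ') i++;
        int start = i;
        while (s[i] != '\0' && s[i] != ' ') i++;
        if (i - start == best) {
            for (int n = start; n < i; n++) printf("%c", s[n]);
            printf(" ");
        }
    }
}
```

Both passes are O(n) over the string, so the whole thing is linear regardless of how many spaces appear in a row.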
/* The textbook's answer: */
#include <iostream>
#include <time.h>
using namespace std;

const int Max = 200;

char* findshort(char s[])
{
    static char s1[Max];   // its address is returned, so it is declared static
    char s2[Max];
    int i = 0, j, len1 = 0, len2 = 0;
    while (s[i++] != '