Mail Archives: djgpp/1995/06/26/22:48:11

To: k DOT ashley AT ulcc DOT ac DOT uk, kagel AT quasar DOT bloomberg DOT com
Cc: djgpp AT sun DOT soe DOT clarkson DOT edu
Subject: Re: Problems opening > 20 files when using FOPEN
Date: Tue, 27 Jun 1995 12:01:40 +1000
From: Peter Horan <peter AT deakin DOT edu DOT au>

Kevin Ashley wrote:
> In article <DAr9Bx DOT KL3 AT jade DOT mv DOT net>, Peter Horan <peter AT deakin DOT edu DOT au> writes:
> |>> 
> |>> If i do a low level open using OPEN command, i can open 34 (ex files open 
> |>> by system). BUT if i use FOPEN, i cannot open more than 15 files (streams)
> |>> and get an error msg "too many files open"
> |>> 
> |>In UNIX, the number of file descriptors is limited to 20, numbered 0 to 19. 
> |>This limit is a function of the library. Descriptors 0, 1 and 2 correspond to 
> |>stdin, stdout and stderr. In Microsoft (and Borland? and djgpp?) compilers, 
> |>stdprn and stdaux are also defined and opened by the system leaving you with 
> |>15. 
> |>
> 
> The statement 'in UNIX, the number of file descriptors is limited to 20'
> is incorrect in my experience.

Art S. Kagel, kagel AT ts1 DOT bloomberg DOT com wrote:
> On  Mon, 26 Jun 1995  Peter Horan <peter AT deakin DOT edu DOT au> wrote:
> 
>    In UNIX, the number of file descriptors is limited to 20, numbered 0 to 19. 
>    This limit is a function of the library. Descriptors 0, 1 and 2 ...
> 
> In response to Paul Lancette's problem.  This is a common misconception.  UNIX
> DOES NOT limit the number of open file handles to 20.  In both System V and BSD
> the maximum number of open files per process and the maximum number of open
> files for the system as a whole are tunable parameters in the kernel gen.
> Virtually every commercial UNIX site I have seen has increased both the per
> process limit and the system limit.  I even worked at one shop where both were
> set to the maximum allowable value (this is version dependent but always
> >256/process and >1024 for the system)!  The 20 files per process is simply the
> usual default value and the one delivered with the gen out of the box.

I should have said "In UNIX, the limit _used_to_be_ 20 and C compilers on DOS 
inherited this limit". As djgpp runs in a DOS environment, my answer, I 
believe, was reasonable despite my error. I responded in terms of what I knew I 
could assume in _that_ environment and made suggestions for _that_ environment.

To assume otherwise may lead to problems. If one works in an environment 
where the limit differs, one is free to do as one pleases, but then one is 
assuming that the code will be used only in a compatible environment. (And 
that is also why I did not mention threads).
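
If one prefers not to assume a particular limit at all, ISO C offers a small
portable check: <stdio.h> defines FOPEN_MAX, the minimum number of streams
(counting stdin, stdout and stderr) the implementation guarantees can be open
at once. The sketch below is only illustrative and not part of my original
reply; the macro reports a guaranteed minimum, not necessarily the actual
runtime limit.

/* Illustrative sketch: print the stdio library's guaranteed stream limit.
   FOPEN_MAX is standard C; nothing is opened. */
#include <stdio.h>

int main(void)
{
  /* Minimum number of simultaneously open streams the library promises,
     including stdin, stdout and stderr. */
  printf("FOPEN_MAX = %d\n", (int)FOPEN_MAX);
  return 0;
}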

The limit in Solaris (untuned) is 64, including stdin, stdout and stderr, as 
the code below shows.


Peter Horan                     School of Computing and Mathematics
peter AT deakin DOT edu DOT au	   	Deakin University
                                Geelong
+61-52-27 1234 (Voice)          Victoria 3217
+61-52-27 2028 (FAX)            Australia

/**************************************************************

        How many files can be fopen()ed on this system?

        Peter Horan
        June, 1995

***************************************************************/

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
  int i = 0;
  FILE *fd;
  char filename[12], filenamebase[12];
  char *f;

  /* Keep opening files until fopen() fails */
  do
    {
      /* Work around mktemp()'s limit of 26 names per template */
      if (i % 26 == 0)
        {
          sprintf(filenamebase, "%d", i / 26);
          strcat(filenamebase, "XXXXXX");
        }
      strcpy(filename, filenamebase);
      f = mktemp(filename);
      fd = fopen(f, "w");
      i++;
    }
  while (fd != NULL);
  /* The last fopen() failed; report why and how many succeeded */

  perror("fopen");
  printf("Max number of open files = %i\n", i - 1);
  printf("Don't forget to delete the files\n");
  return 0;
}

/* End of file */
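
For completeness: on a POSIX system such as Solaris the per-process descriptor
limit can also be queried directly, rather than by opening files until fopen()
fails. The sketch below is illustrative only; it assumes sysconf() and
getrlimit() are available, which one cannot count on under DOS/djgpp.

/* Illustrative sketch: query the per-process open-file limit on a POSIX
   system without creating any files. */
#include <stdio.h>
#include <unistd.h>
#include <sys/resource.h>

int main(void)
{
  struct rlimit rl;

  /* Per-process limit on open file descriptors, as reported by the library */
  printf("sysconf(_SC_OPEN_MAX) = %ld\n", sysconf(_SC_OPEN_MAX));

  /* The same limit expressed as a resource limit; the hard limit is the
     ceiling to which the soft limit may be raised */
  if (getrlimit(RLIMIT_NOFILE, &rl) == 0)
    printf("RLIMIT_NOFILE: soft = %ld, hard = %ld\n",
           (long)rl.rlim_cur, (long)rl.rlim_max);
  return 0;
}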
