Simultaneous logins

Kevin Kadow (kadokev@ripco.com)
Tue, 4 Mar 1997 00:57:00 -0600 (CST)

Robert Hiltibidal <morgan@tekfront.com> wrote:
> On Mon, 3 Mar 1997, Walter wrote:
> > >But he's right and explained why. RADIUS isn't set up to stop multiple
> > logins.
>
> Ok folks...enuff is enuff. No software package is ever complete for 100%
> of all situations in the field. Limiting users to one login can be done
> in perl or expect or c++ or even in unix. For nt users you're probably
> limited to perl or c++. In nt I doubt visual basic can handle it. Rather
> than spend lots of bandwidth discussing what names to call each other why
> not invest $60 in a book and code what you need? Everyone associated with
> the group has to have had some type of computer experience and a simple
> language like perl won't be that hard to pick up. And yes perl is
> available for nt. Check ftp.uu.net/gnu as one source and I'm told
> www.perl.com as another site.

Or look carefully in any search engine, and find one of the many scripts
people have written and published that do this. My 'pmwhoall.pm' can
be changed to do this with a few minutes' work; the only reason I haven't
published the script _with_ the changes is that there are a lot of
policy decisions that shape the code -- like, do you kick off ALL sessions
when you find a duplicate, just the oldest, or just the newest? Do you
exempt all PPP customers, or just ISDN customers? etc...
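Those policy decisions can be sketched as a single function. This is a
hypothetical illustration (in Python rather than Perl), not the actual
'pmwhoall.pm' code; the session fields, service names, and policy labels
are all assumptions:

```python
from dataclasses import dataclass

@dataclass
class Session:
    user: str
    port: str
    start: float   # login time, seconds since epoch
    service: str   # e.g. "ppp" or "isdn" (assumed labels)

def sessions_to_kick(sessions, policy="oldest", exempt=()):
    """Given all active sessions for ONE user, return those to disconnect."""
    if len(sessions) < 2:
        return []                      # no duplicate login, nothing to do
    if all(s.service in exempt for s in sessions):
        return []                      # exempted service class (e.g. PPP)
    by_age = sorted(sessions, key=lambda s: s.start)
    if policy == "all":
        return by_age                  # kick every session for the user
    if policy == "oldest":
        return by_age[:1]              # kick only the oldest session
    if policy == "newest":
        return by_age[-1:]             # kick only the newest session
    raise ValueError(f"unknown policy: {policy}")
```

Which policy is right depends on the site: kicking the newest punishes the
second login, kicking the oldest assumes a stale session, kicking all is the
bluntest deterrent.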

Heck, pay me my usual rate, tell me what behavior you want, and I'll
change 'pmwhoall.pm' for you. Let me publish the custom version and I'll
give you a discount!

> Now... let's consider the pros + cons of a multiple login loggerout....
>
> Situation 1, an isp has one portmaster. We have our fancy multiple login
> scan program (fmlsp) and let's say we're really good and code it in c++.
> Let's say we set it up in a cron job for unix/linux and we have it running
> every 15 minutes. That means every 15 minutes that program is going to
> open data files, find out who is supposed to be on the system and then
> close datafiles. This happens every 15 minutes, 24 hours a day and 7 days
> a week.

Let's say we're really good, grab bpmtools, code it in PERL and get it
written in 15 minutes...

> Don't know about you folks but I wouldn't want to put that kind of demand
> on my hard drives.

Who said anything about writing data files? And running a job every 15
minutes is not exactly "that kind of demand" on a PC running Unix.

> Situation 2, assume 2 portmasters and again assume c++. Those who do
> coding will know why c++ or any binary code is preferable...takes up less
> memory and operates twice as fast if not faster..... Now we have a
> decision to make.. do we make 2 programs or do we make 1 large one.. ever

Same as above: one little PERL daemon that runs at boot, wakes every
15 minutes, scans the portmasters, and kicks off duplicates. I do exactly
this; the load it adds to the Unix system and portmasters is unnoticeable.
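The shape of such a daemon is simple. Here is a rough sketch (again in
Python rather than the Perl described); `poll_portmaster()` and `kick()`
stand in for the real portmaster queries and are assumptions, as are the
hostnames:

```python
import time
from collections import defaultdict

PORTMASTERS = ["pm1.example.net", "pm2.example.net"]   # hypothetical hosts
INTERVAL = 15 * 60                                     # seconds between scans

def scan_once(poll_portmaster, kick):
    """One pass: gather (user, port, start) tuples from every portmaster,
    then kick all but the newest session for any user on more than once."""
    sessions = defaultdict(list)
    for pm in PORTMASTERS:
        for user, port, start in poll_portmaster(pm):
            sessions[user].append((start, pm, port))
    for user, logins in sessions.items():
        logins.sort()                      # oldest session first
        for start, pm, port in logins[:-1]:
            kick(pm, port)                 # disconnect the older duplicates

def main(poll_portmaster, kick):
    # All the interpreter startup cost is paid once, at boot; after that
    # it is just a sleep/scan loop.
    while True:
        scan_once(poll_portmaster, kick)
        time.sleep(INTERVAL)
```

Note there are no data files at all: the state lives in the portmasters and
is polled fresh each pass.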

> constant is the demand on system resources. And that ladies...is the crux
> of the problem. How much demand on system resources needs to exist?

PERL is fine if you do all the startup and interpretation once (at system boot).
One MB of RAM and a few cycles four times an hour is not exactly a big load.

> Time to digress a bit....
>
> A quick ping on a customer logging in puts their response time at 138 ms.
> This is from the server thru the portmaster to their modem. If I ping a
> host on the ethernet I get 1.0865 ms response. So that means the
> portmaster is tacking on about 135 ms thru its routing of traffic.
> Anybody have clients who play online games like quake or mechwarrior or
> diablo? Imagine then their response to having their latency time
> increased maybe 200x because the portmaster has to do a constant check of
> who's on..

Bullshit.

The PORTMASTER is _NOT_ "tacking on about 135 ms thru routing of traffic";
99.9% of the lag is due to the modem and asynchronous serial port. If the
portmaster had to check who is on, it would only do so at login, and
might add a second to the time it takes to initially log in, but would
CAUSE NO ADDITIONAL DELAY ON ESTABLISHED CIRCUITS.
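A back-of-envelope check makes the point (the packet size and link speed
here are assumed round numbers, not measurements): just clocking a ping
packet through a 28.8 kbps async modem link, before any modem buffering or
compression delay, already eats a large chunk of that 135 ms.

```python
# Serialization delay of a small packet over an async modem link.
PACKET_BYTES = 64          # typical small ICMP echo packet (assumed)
BITS_PER_BYTE = 10         # 8 data bits + start/stop bits on the async line
LINK_BPS = 28_800          # modem speed (assumed)

one_way_ms = PACKET_BYTES * BITS_PER_BYTE / LINK_BPS * 1000
round_trip_ms = 2 * one_way_ms
print(f"serialization alone: {round_trip_ms:.1f} ms round trip")
# -> serialization alone: 44.4 ms round trip
```

Add the modem's own internal buffering and error-correction latency in each
direction and the observed ~135 ms is entirely accounted for without the
portmaster's routing contributing anything measurable.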

Regardless, having the portmaster talk to other portmasters in order to
prevent multiple logins (something like rwhod?) is evil for other reasons.

> Perhaps the easiest and most direct way of handling multiple logins is
> simply to meter the clients service. RADIUS will log both concurrent
> logins meaning time is kept for the two sessions and added together at
> the end of the month for those lucky enough to raquick....

CPU time is cheap; human time spent on billing (and collecting) is not.
As I tell the users, I'd rather have the resources available for another
user than the extra couple of bucks from the abuser who logs in twice.

> Boils down to this I would not want radius or the portmaster to track
> multiple logins if it seriously affected the latency time.

There is _zero_ reason to believe that tracking multiple logins would
have _any_ effect on latency. If it's done pro-actively at login, it
might take users a little longer to log in, and it's hard to avoid kicking off
a user who doesn't deserve it (due to missing accounting packets, etc.).

> I'm not sure if I want to commit a host nt,unix, or linux to constantly
> scan the portmasters just to make sure all the customers are behaving.

If multiple logins are handled retroactively by checking the portmasters
every X minutes and kicking duplicates, it's easier to ensure that only
real abusers are disconnected.

> More important things like web business exists. It really is a
> question of priorities...how much are you willing to spend?

Let's see: a free Unix-like OS on a $900 P133, and about two hours to set
it up. If you consider the savings of freeing up a phone line, or of getting
a customer who shares his account to cough up the payment for a second
account, it should pay for itself in under a year.
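The payback arithmetic is easy to sanity-check. Every figure below except
the $900 box is an assumption for illustration (the original post names no
hourly rate, account price, or line cost):

```python
hardware = 900          # the $900 P133 mentioned above
labor = 2 * 60          # two hours of setup at an assumed $60/hr
monthly_gain = (2 * 20  # two shared-account users made to pay (assumed $20/mo)
                + 50)   # one phone line + modem no longer needed (assumed)

months_to_payback = (hardware + labor) / monthly_gain
print(f"pays for itself in about {months_to_payback:.1f} months")
# -> pays for itself in about 11.3 months
```

Under those (debatable) numbers the box pays for itself in under a year,
consistent with the claim above; with more recovered accounts it pays off
proportionally faster.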

> What's it going to get you in return?

The "cost in labor" and hardware is, IMHO, justified by keeping users from
sharing accounts and hogging ports, catching compromised passwords, and
having actual port-utilization statistics.