Given the distribution having p.m.f.

f(x) = (exp(-a) * a^x) / x!,  x = 0, 1, 2, ...,  a >= 0,

find the maximum likelihood estimator of a.

To find the maximum likelihood estimator (MLE) of the parameter "a" for the given probability mass function (PMF) (this is the Poisson distribution with mean a), we need to maximize the likelihood function.

The likelihood function is the product of the PMF evaluated at each observed data point. Suppose we have n independent observations: x₁, x₂, ..., xₙ.

The likelihood function (L) is given by:

L(a) = f(x₁) * f(x₂) * ... * f(xₙ)

Taking the logarithm of the likelihood function (the log-likelihood) simplifies the calculations and, because the logarithm is strictly increasing, does not change the location of the maximum. So, we take the natural logarithm of both sides:

log L(a) = log f(x₁) + log f(x₂) + ... + log f(xₙ)

Now, let's substitute the given PMF into the log-likelihood:

log L(a) = log[(exp(-a) * a^x₁) / x₁!] + log[(exp(-a) * a^x₂) / x₂!] + ... + log[(exp(-a) * a^xₙ) / xₙ!]

Using logarithmic properties (the log of a product is the sum of the logs, and log e^(-a) = -a), we can simplify each term:

log L(a) = (-a + x₁ * log a - log x₁!) + (-a + x₂ * log a - log x₂!) + ... + (-a + xₙ * log a - log xₙ!)

log L(a) = -n * a + (x₁ + x₂ + ... + xₙ) * log a - (log x₁! + log x₂! + ... + log xₙ!)
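As a quick numerical sketch, the Poisson log-likelihood log L(a) = -n*a + (Σ xᵢ) log a - Σ log xᵢ! can be computed directly. The sample `xs` below is hypothetical, and `math.lgamma(x + 1)` is used as a numerically stable way to evaluate log x!:

```python
import math

def log_likelihood(a, xs):
    # log L(a) = -n*a + (sum of x_i) * log(a) - sum of log(x_i!)
    n = len(xs)
    return (-n * a
            + sum(xs) * math.log(a)
            - sum(math.lgamma(x + 1) for x in xs))  # lgamma(x+1) = log(x!)

xs = [2, 0, 3, 1, 4]  # hypothetical sample of Poisson counts
print(log_likelihood(1.0, xs))
```

Evaluating at a few candidate values of a shows the function rising toward the sample mean and falling beyond it, which previews the result derived below.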

Now, we want to find the value of "a" that maximizes the log-likelihood function. To do this, we take the derivative of the log-likelihood function with respect to "a" and set it equal to zero.

d/d(a) [log L(a)] = 0

Differentiating the log-likelihood function with respect to "a", we get:

(x₁ + x₂ + ... + xₙ) / a - n = 0

Solving for "a", we find:

(x₁ + x₂ + ... + xₙ) = n * a

Therefore, the maximum likelihood estimator of "a" is given by:

a = (x₁ + x₂ + ... + xₙ) / n

In other words, the maximum likelihood estimator of "a" is the sample mean of the observed data points. (The second derivative of the log-likelihood, -(x₁ + x₂ + ... + xₙ)/a², is negative for a > 0, confirming that this critical point is indeed a maximum.)
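The result can be checked numerically: maximizing the Poisson log-likelihood -n*a + (Σ xᵢ) log a - Σ log xᵢ! over a grid of candidate values should land on the sample mean. A minimal sketch, using a hypothetical sample `xs` and a simple grid search rather than a calculus-based solver:

```python
import math

def log_likelihood(a, xs):
    # Poisson log-likelihood: -n*a + (sum of x_i)*log(a) - sum of log(x_i!)
    n = len(xs)
    return -n * a + sum(xs) * math.log(a) - sum(math.lgamma(x + 1) for x in xs)

xs = [2, 0, 3, 1, 4]                       # hypothetical sample; mean = 2.0
grid = [0.01 * k for k in range(1, 501)]   # candidate a values 0.01 .. 5.00
a_hat = max(grid, key=lambda a: log_likelihood(a, xs))
print(a_hat)  # agrees with sum(xs)/len(xs) = 2.0
```

The grid maximizer coincides with the arithmetic mean of the sample, matching the closed-form MLE derived above.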