Feature engineering suggestion required






I am having a problem during feature engineering and am looking for suggestions. Problem statement: I have usage data for multiple customers over 3 days. Some customers have only 1 day of usage, some 2, and some 3. The data covers the number of emails sent, contacts added, etc. on each day.



I am converting this time-series data to a column-wise layout, i.e., the number of emails a customer sent on day 1 is one feature, the number sent on day 2 is another, and so on. The problem is that usage can be increasing for some customers and decreasing for others.



Example 1: customer 'A' → emails sent on day 1 = 100, emails sent on day 2 = 0

Example 2: customer 'B' → emails sent on day 1 = 0, emails sent on day 2 = 100

Example 3: customer 'C' → emails sent on day 1 = 0, emails sent on day 2 = 0

Example 4: customer 'D' → emails sent on day 1 = 100, emails sent on day 2 = 100



In the first two cases, my new (difference) feature takes the values -100 and 100, which is good for differentiation. But the problem arises in the 3rd and 4th cases, where the new feature is 0 in both scenarios. Can anyone suggest a way to handle this?
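As a concrete illustration, the wide-format conversion and difference feature described above could be sketched in pandas like this (the column names `customer`, `day`, and `emails` are hypothetical):

```python
import pandas as pd

# Hypothetical long-format usage log: one row per customer per day.
usage = pd.DataFrame({
    "customer": ["A", "A", "B", "B", "C", "C", "D", "D"],
    "day":      [1, 2, 1, 2, 1, 2, 1, 2],
    "emails":   [100, 0, 0, 100, 0, 0, 100, 100],
})

# Pivot so each day becomes its own feature column.
wide = usage.pivot(index="customer", columns="day", values="emails")
wide.columns = [f"emails_day{d}" for d in wide.columns]

# Day-over-day difference: -100 (A), 100 (B), 0 (C), 0 (D).
wide["emails_diff"] = wide["emails_day2"] - wide["emails_day1"]
```

Customers C and D both end up with `emails_diff = 0`, which is exactly the ambiguity in question.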



One way to handle this:



I could add "No change" in those scenarios, but then the new feature would have to be categorical, which is not ideal since the other values are continuous.



Instead, I could keep the absolute values in the new feature and add a trend indicator: "+1" for increasing, "-1" for decreasing, "no change" when the values are equal and non-zero, and "0" when both values are "0". Would that be a good approach, though?



The end goal is to predict whether a user will continue using the application or not, so it would basically be a two-class model. And I would want to capture even the scale of usage, i.e., a user sending 100 emails every day should be different from a user sending 10000 emails every day.
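One fully numeric variant of the encoding discussed above (a hypothetical sketch, not the only option) is to split each pair into a direction code and a separate scale feature, so the 0→0 and 100→100 cases stay distinguishable without any categorical values:

```python
import numpy as np

def trend_and_level(day1, day2):
    """Encode a usage pair as (direction, scale).

    trend: +1 increasing, -1 decreasing, 0 unchanged.
    level: mean daily usage; distinguishes a 0->0 customer
    (level 0) from a 100->100 customer (level 100), and a
    100-a-day user from a 10000-a-day user.
    """
    trend = int(np.sign(day2 - day1))
    level = (day1 + day2) / 2
    return trend, level
```

Here `trend_and_level(0, 0)` gives `(0, 0.0)` while `trend_and_level(100, 100)` gives `(0, 100.0)`, so the two zero-difference cases remain separable.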










  • Could you explain a bit better what you are trying to predict? Your question is well explained, but the kind of model you plan to train might give some of us better ideas. – Pedro Henrique Monforte, Apr 11 at 1:40

  • I would want to predict whether a user will continue using the application or not, so it would basically be a two-class model. Does that answer? – SSuram, Apr 11 at 2:32

  • Yes, just add it to your question and it will be perfect. – Pedro Henrique Monforte, Apr 11 at 2:35















machine-learning feature-engineering data-science-model






asked Apr 11 at 1:26 by SSuram; edited Apr 11 at 2:37


1 Answer
Well, you want to identify change in usage, so you could try something like:

$$ f(\mathrm{day}_1, \mathrm{day}_2) = \frac{\mathrm{day}_2 - \mathrm{day}_1 + \delta}{\left|\mathrm{day}_2 - \mathrm{day}_1 + \delta\right|} \times \left|\frac{\mathrm{day}_2 + \mathrm{day}_1}{\mathrm{day}_2 + \mathrm{day}_1 + 1}\,(\mathrm{day}_2 - \mathrm{day}_1 + 1)\right| $$

where $\delta$ is the machine epsilon (the minimum value that can be added to a float to distinguish it from other floats).

That will give you
$$f(100,0) \approx -98.02$$
$$f(0,100) = 100$$
$$f(100,100) \approx 0.995$$
$$f(0,0) = 0$$
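In code, the function can be written as follows (a sketch; the name `usage_feature` is mine). It is a direct translation of the formula, with the sign factor and magnitude factor computed separately:

```python
import numpy as np

def usage_feature(day1, day2, delta=np.finfo(float).eps):
    """Signed-magnitude change feature for two days of usage.

    The first factor is the sign of the change (delta breaks the
    tie toward +1 when day1 == day2); the second factor is close
    to |day2 - day1| for real changes and squeezes the no-change
    cases into [0, 1).
    """
    sign = (day2 - day1 + delta) / abs(day2 - day1 + delta)
    magnitude = abs((day2 + day1) / (day2 + day1 + 1) * (day2 - day1 + 1))
    return sign * magnitude
```

Evaluating it on the four example pairs reproduces the values listed in the answer.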



You can look at my experiment here.

This maps all the no-change cases into $[0,1]$, where $f(0,0)$ maps to $0$ and $f(\infty,\infty)$ maps to $1$.

Where is it from? I just tuned the function manually, but I think it might suffice for your application.

Explaining the idea

You want a feature that packs a lot of information:
  • Is the usage greater than zero?
  • Is it increasing or decreasing?
  • If it is stalled, how much is the usage?

Well, your usage varies in integer values, so you can map the entire non-changing-but-above-zero case onto a previously unused interval.

The function above maps all the non-changing possibilities into $[0,1]$ in an exponential kind of way ($a^{-\frac{1}{\mathrm{usage}}}$). You can also extract the actual value from positive changes, and an approximate value from negative changes (the approximation being better when the drop is large).

This is not the perfect scenario, but it is the most information I could compress into one variable with little loss.






  • I am not sure it answers the "I would want to capture even the scale of usage, i.e., 'a user sending 100 emails every day' should be different from 'a user sending 10000 emails every day'" part of the question. Could you please explain? – SSuram, Apr 11 at 2:38

  • What would you say about extending it to f = (((d2-d1+eps)/abs(d2-d1+eps))*abs((d2+d1)/(d1+d2+1)*(d2-d1+1)))*(d2/1000)*(d1/1000), where "1000" would be max(usage)? – SSuram, Apr 11 at 3:02

  • That will actually return zero for nearly every case. – Pedro Henrique Monforte, Apr 11 at 3:13











answered on Data Science Stack Exchange by Pedro Henrique Monforte (community wiki; 3 revs, 2 users 98%); edited yesterday











